Validations API

The Validations API provides access to validation data from Project Sidewalk, where community members review and validate labels placed by other users. Each validation record includes the validator's judgment (Agree, Disagree, or Unsure), any changes made to tags or severity levels, and associated metadata.

This API is useful for researchers studying data quality, community validation patterns, and the reliability of crowdsourced accessibility data. The validation process helps ensure the accuracy and consistency of Project Sidewalk's accessibility information.


Endpoint

Retrieve validation data with optional filtering by various criteria.

GET /v3/api/validations

Examples

/v3/api/validations Get all validations in JSON (default)

/v3/api/validations?filetype=csv Get all validations in CSV format

/v3/api/validations?validationResult=1 Get only "Agree" validations

/v3/api/validations?labelTypeId=2&changedTags=true Get validations for label type 2 where tags were changed
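The example requests above can be composed programmatically. The sketch below builds a request URL from optional query parameters; the base URL is a hypothetical placeholder, since Project Sidewalk runs separate servers per city.

```python
from urllib.parse import urlencode

# Hypothetical base URL; replace with your city's Project Sidewalk server.
BASE_URL = "https://sidewalk-example.cs.washington.edu/v3/api/validations"

def build_validations_url(base_url: str = BASE_URL, **params) -> str:
    """Build a Validations API request URL, dropping parameters set to None."""
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{base_url}?{query}" if query else base_url

# e.g., "Agree" validations for label type 2, returned as CSV
url = build_validations_url(validationResult=1, labelTypeId=2, filetype="csv")
```

Any HTTP client can then fetch the resulting URL; the parameter names match the table in the Query Parameters section below.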


Query Parameters

This endpoint accepts the following optional query parameters to filter validation data.

Parameter Type Description
labelId integer Filter validations for a specific label by its unique ID.
userId string Filter validations performed by a specific user (validator).
validationResult integer Filter by validation result: 1 (Agree), 2 (Disagree), 3 (Unsure).
labelTypeId integer Filter validations for a specific label type (e.g., CurbRamp, Obstacle). See the Label Types API.
validationTimestamp string Filter validations performed after this ISO 8601 timestamp (e.g., "2023-01-01T00:00:00Z").
changedTags boolean Filter validations where tags were changed (true) or not changed (false).
changedSeverityLevels boolean Filter validations where severity levels were changed (true) or not changed (false).
filetype string Output format: json (default), csv. Note: shapefile is not supported for validation data.
inline boolean Whether to display the file inline (true) or as a download attachment (false, default).
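Because invalid parameter values produce a 400 Bad Request (see Error Responses below), it can be useful to validate them client-side before sending a request. This is a minimal sketch covering the two constrained parameters from the table above:

```python
# Allowed values taken from the parameter table:
# validationResult is 1 (Agree), 2 (Disagree), or 3 (Unsure);
# filetype is json or csv (shapefile is not supported for validations).
ALLOWED_RESULTS = {1, 2, 3}
ALLOWED_FILETYPES = {"json", "csv"}

def check_params(params: dict) -> list:
    """Return a list of error messages for unsupported parameter values."""
    errors = []
    if "validationResult" in params and params["validationResult"] not in ALLOWED_RESULTS:
        errors.append("validationResult must be 1 (Agree), 2 (Disagree), or 3 (Unsure)")
    if "filetype" in params and params["filetype"] not in ALLOWED_FILETYPES:
        errors.append("filetype must be json or csv")
    return errors
```

An empty return value means the checked parameters are well-formed; anything else lists what needs fixing.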

Responses

Success Response (200 OK)

On success, the API returns an HTTP 200 OK status code and the requested data in the specified format.

JSON Format (Default)

Returns a JSON array containing validation objects:

[
    {
        "label_validation_id": 12345,
        "label_id": 67890,
        "label_type_id": 2,
        "label_type": "NoCurbRamp",
        "validation_result": 1,
        "validation_result_string": "Agree",
        "old_severity": 3,
        "new_severity": 4,
        "old_tags": ["narrow"],
        "new_tags": ["narrow", "steep"],
        "user_id": "user_abc123",
        "validator_type": "Human",
        "mission_id": 1234,
        "canvas_x": 186,
        "canvas_y": 148,
        "heading": 239.29465,
        "pitch": -15.819197,
        "zoom": 2,
        "canvas_height": 288,
        "canvas_width": 384,
        "start_timestamp": "2025-08-04T21:52:05.158Z",
        "end_timestamp": "2025-08-04T21:52:05.158Z",
        "source": "ExpertValidate"
    },
    {
        "label_validation_id": 12346,
        "label_id": 67891,
        "label_type_id": 1,
        "label_type": "CurbRamp",
        "validation_result": 2,
        "validation_result_string": "Disagree",
        "old_severity": 2,
        "new_severity": 2,
        "old_tags": ["wide"],
        "new_tags": ["wide"],
        "user_id": "user_def456",
        "validator_type": "AI",
        "mission_id": 1235,
        "canvas_x": 186,
        "canvas_y": 148,
        "heading": 239.29465,
        "pitch": -15.819197,
        "zoom": 2,
        "canvas_height": 288,
        "canvas_width": 384,
        "start_timestamp": "2025-08-04T21:52:05.158Z",
        "end_timestamp": "2025-08-04T21:52:05.158Z",
        "source": "SidewalkAI"
    }
]
JSON Field Descriptions
Field Type Description
label_validation_id integer Unique identifier for the validation record.
label_id integer ID of the label that was validated.
label_type_id integer ID of the label type (e.g., 1 for CurbRamp, 2 for NoCurbRamp).
label_type string Name of the label type (e.g., "CurbRamp", "Obstacle").
validation_result integer Validation judgment: 1 (Agree), 2 (Disagree), 3 (Unsure).
validation_result_string string Validation judgment as text: "Agree", "Disagree", or "Unsure".
old_severity integer | null Severity rating before validation (1-5 scale), or null if not applicable.
new_severity integer | null Severity rating after validation, or null if not changed/applicable.
old_tags array[string] Array of tag names before validation.
new_tags array[string] Array of tag names after validation.
user_id string ID of the user who performed the validation.
validator_type string One of: Human, AI.
mission_id integer The mission ID associated with the validation.
canvas_x integer | null The X-coordinate on the GSV canvas of the label when it was validated, or null if the label was offscreen.
canvas_y integer | null The Y-coordinate on the GSV canvas of the label when it was validated, or null if the label was offscreen.
heading number Camera heading in GSV when the label was validated.
pitch number Camera pitch in GSV when the label was validated.
zoom number Camera zoom in GSV when the label was validated.
canvas_width integer Width of the GSV canvas in pixels.
canvas_height integer Height of the GSV canvas in pixels.
start_timestamp string ISO 8601 formatted timestamp when the user started viewing the label for validation.
end_timestamp string ISO 8601 formatted timestamp when the validation was performed.
source string Interface in which the validation occurred: "Validate", "ValidateMobile", "ExpertValidate", "LabelMap", "GalleryImage", "GalleryExpandedImage", "GalleryThumbs", "GalleryExpandedThumbs", "SidewalkAI", "AdminUserDashboard", "AdminLabelSearchTab", "ExternalTagValidationASSETS2024".
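Once the JSON response is parsed, records like those above are straightforward to summarize. This sketch (over a tiny hand-made sample mirroring the response shape, not real API output) tallies validation results and counts how many validations changed a label's tags:

```python
from collections import Counter

# Abbreviated sample records in the same shape as the JSON response above.
records = [
    {"validation_result_string": "Agree",
     "old_tags": ["narrow"], "new_tags": ["narrow", "steep"]},
    {"validation_result_string": "Disagree",
     "old_tags": ["wide"], "new_tags": ["wide"]},
]

# Distribution of validator judgments.
result_counts = Counter(r["validation_result_string"] for r in records)

# Number of validations where the tag set changed.
tags_changed = sum(set(r["old_tags"]) != set(r["new_tags"]) for r in records)
```

Comparing `old_tags`/`new_tags` as sets ignores ordering, which is usually what you want when measuring whether a validator edited the tags.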

CSV Format

If filetype=csv is specified, the response will be CSV data with the same field structure:

label_validation_id,label_id,label_type_id,label_type,...
12345,67890,2,NoCurbRamp,...
12346,67891,1,CurbRamp,...
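A CSV response can be read row-by-row with the standard library; each row becomes a dict keyed by the header fields. The snippet below parses a small inline sample in the same shape as the response:

```python
import csv
import io

# Tiny CSV sample mirroring the response structure shown above.
csv_text = """label_validation_id,label_id,label_type_id,label_type
12345,67890,2,NoCurbRamp
12346,67891,1,CurbRamp
"""

# DictReader uses the first row as field names.
rows = list(csv.DictReader(io.StringIO(csv_text)))
```

Note that `csv` yields all values as strings, so numeric fields like `label_id` need explicit conversion with `int()` before arithmetic.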

Error Responses

If an error occurs, the API will return an appropriate HTTP status code and a JSON response body containing details about the error.

  • 400 Bad Request: Invalid parameter values (e.g., invalid validationResult, unsupported filetype).
  • 404 Not Found: The requested resource does not exist.
  • 500 Internal Server Error: An unexpected error occurred on the server.
  • 503 Service Unavailable: The server is temporarily unable to handle the request.

Error Response Body

Error responses include a JSON body with the following structure:

{
    "status": 400,
    "code": "INVALID_PARAMETER",
    "message": "Invalid validationResult value. Must be 1 (Agree), 2 (Disagree), or 3 (Unsure).",
    "parameter": "validationResult"
}
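A client can turn that error body into a readable message. This is a sketch, assuming the four fields shown above (`parameter` may be absent for errors not tied to a specific parameter):

```python
def describe_error(body: dict) -> str:
    """Format an API error body into a single readable line (sketch)."""
    parts = [
        f"HTTP {body['status']}",
        body.get("code", "UNKNOWN"),
        body.get("message", ""),
    ]
    if "parameter" in body:
        parts.append(f"(parameter: {body['parameter']})")
    return " ".join(p for p in parts if p)

msg = describe_error({
    "status": 400,
    "code": "INVALID_PARAMETER",
    "message": "Invalid validationResult value. Must be 1 (Agree), 2 (Disagree), or 3 (Unsure).",
    "parameter": "validationResult",
})
```

In practice you would call this only when the HTTP status is not 200, since successful responses carry data rather than this error structure.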

Validation Result Types

Project Sidewalk uses three validation result types to categorize validator responses:

ID Name Description
1 Agree The validator agrees with the original label placement and classification.
2 Disagree The validator disagrees with the original label placement or classification.
3 Unsure The validator is uncertain about the label's accuracy (e.g., due to image quality).

You can also retrieve this information programmatically from the /v3/api/validationResultTypes endpoint, which includes current count statistics for each validation result type, reported separately for Human and AI validations. Note that there is no "Unsure" result type for AI: AI validations are only recorded when the AI is confident in its classification, so no count of AI "Unsure" responses exists.
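When working with the numeric `validation_result` codes, a small lookup table based on the mapping above keeps analysis code readable:

```python
# Mapping from the Validation Result Types table above.
VALIDATION_RESULTS = {1: "Agree", 2: "Disagree", 3: "Unsure"}

def result_name(code: int) -> str:
    """Translate a validation_result code to its name; 'Unknown' otherwise."""
    return VALIDATION_RESULTS.get(code, "Unknown")
```

Note that records with `validator_type` of "AI" will only ever carry codes 1 or 2, since AI "Unsure" responses are not recorded.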

Data Analysis Tips

The Validations API provides rich data for understanding community consensus and data quality. Here are some suggestions for effectively using this data:

  • Analyze agreement patterns by label type to identify which accessibility features are easier or harder for the community to classify consistently
  • Track tag and severity changes to understand how validation improves label accuracy and completeness
  • Use temporal filtering to study validation patterns over time
  • Cross-reference with user data to analyze validation behavior across different user types and experience levels
  • Map validation results geographically to identify neighborhoods or street types where labels are more controversial
  • Calculate inter-rater reliability by analyzing multiple validations of the same labels
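As a starting point for the last suggestion, the sketch below groups toy validations by `label_id` and computes a simple per-label agreement rate (the fraction of "Agree" among decisive, i.e. non-"Unsure", validations). It uses hand-made sample data, not real API output:

```python
from collections import defaultdict

# Toy validations; validation_result: 1=Agree, 2=Disagree, 3=Unsure.
validations = [
    {"label_id": 1, "validation_result": 1},
    {"label_id": 1, "validation_result": 1},
    {"label_id": 1, "validation_result": 2},
    {"label_id": 2, "validation_result": 2},
]

by_label = defaultdict(list)
for v in validations:
    by_label[v["label_id"]].append(v["validation_result"])

def agreement_rate(results):
    """Fraction of Agree among non-Unsure validations; None if all Unsure."""
    decisive = [r for r in results if r != 3]
    return sum(r == 1 for r in decisive) / len(decisive) if decisive else None

rates = {label_id: agreement_rate(results) for label_id, results in by_label.items()}
```

For formal inter-rater reliability you would likely move on to a statistic such as Fleiss' kappa, but a per-label agreement rate like this is often enough to flag controversial labels.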

Related APIs

For comprehensive validation analysis, consider using the Validations API alongside other Project Sidewalk APIs, such as the Label Types API.

Contribute

Project Sidewalk is an open-source project created by the Makeability Lab and hosted on GitHub. We welcome your contributions! If you found a bug or have a feature request, please open an issue on GitHub.

You can also email us at sidewalk@cs.uw.edu.

Project Sidewalk in Your City!

If you are interested in bringing Project Sidewalk to your city, please read our Wiki page.
