An evaluation function that checks exact equality between a student response and the correct answer. Both inputs are cast to a specified type before comparison using Python's == operator. The function is deployed on the lambda-feedback platform.
```
app/
    evaluation.py          # Main evaluation_function
    evaluation_tests.py    # Unit tests
    schema.json            # JSON schema for input validation
    requirements.txt       # Python dependencies
    Dockerfile             # Container image for AWS Lambda
    docs/
        user.md            # End-user documentation
        dev.md             # Developer reference
.github/
    workflows/
        test-lint.yml          # Run tests and linting on pull requests
        staging-deploy.yml     # Deploy to staging on push to main
        production-deploy.yml  # Deploy to production
config.json                # Evaluation function name
.gitignore
```
Send a POST request to the deployed function endpoint with the following JSON body.
| Field | Type | Required | Description |
|---|---|---|---|
| `response` | any | Yes | The student's submitted response |
| `answer` | any | Yes | The correct answer |
| `params.type` | string | Yes | Cast type for both inputs before comparison. One of: `"int"`, `"float"`, `"str"`, `"dict"` |
| `params.display_submission_count` | boolean | No | If true, includes a submission count message in the feedback |
| `params.submission_context.submissions_per_student_per_response_area` | integer | No | Number of previously processed submissions for this student and response area |
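The cast-then-compare behaviour described above can be illustrated with a standalone sketch (this is not the platform's actual code, just the same idea in a few lines): both values are cast with the named type before `==` is applied, so numerically equal strings match under `"int"` but differ under `"str"`.

```python
# Standalone illustration of cast-then-compare; `cast_equal` is a
# hypothetical helper, not part of the deployed function.
casts = {"int": int, "float": float, "str": str}

def cast_equal(response, answer, type_name):
    cast = casts[type_name]
    return cast(response) == cast(answer)

print(cast_equal("45", "45", "int"))     # True
print(cast_equal("45", " 45", "int"))    # int() strips whitespace -> True
print(cast_equal("45", " 45", "str"))    # exact string comparison -> False
print(cast_equal("0.5", ".5", "float"))  # both parse to 0.5 -> True
```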
Simple string comparison:

```json
{
  "response": "hydrophobic",
  "answer": "hydrophobic",
  "params": {
    "type": "str"
  }
}
```

Response:

```json
{
  "is_correct": true
}
```

Integer comparison:

```json
{
  "response": "45",
  "answer": "45",
  "params": {
    "type": "int"
  }
}
```

Response:

```json
{
  "is_correct": true
}
```

With submission count feedback:

```json
{
  "response": "1",
  "answer": "1",
  "params": {
    "type": "int",
    "display_submission_count": true,
    "submission_context": {
      "submissions_per_student_per_response_area": 2
    }
  }
}
```

Response:

```json
{
  "is_correct": true,
  "feedback": "You have submitted 3 responses."
}
```

The function is built on top of a custom base layer, BaseEvaluationFunctionLayer, which provides tooling, testing helpers, and schema checking common to all evaluation functions.
The evaluation function is hosted on AWS Lambda and packaged as a Docker container. Docker bundles the function and all its dependencies into a single image, which AWS runs on demand inside a container. For background reading, see an introduction to containerisation and the AWS Lambda documentation.
Middleware provided by BaseEvaluationFunctionLayer handles request validation, error formatting, and response serialisation. The evaluation.py file only needs to implement the core comparison logic.
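Under these assumptions, the core logic in `evaluation.py` can be sketched as follows. This is a minimal illustration, not the deployed implementation: the exact signature and the way the base layer passes `params` through are assumptions, and the `+ 1` reflects the documented behaviour where the submission count in the feedback includes the current submission.

```python
# Hypothetical sketch of the core comparison logic (names and signature
# assumed; the real base layer may differ).
CASTS = {"int": int, "float": float, "str": str, "dict": dict}

def evaluation_function(response, answer, params):
    cast = CASTS[params["type"]]
    result = {"is_correct": cast(response) == cast(answer)}
    if params.get("display_submission_count"):
        prior = params.get("submission_context", {}).get(
            "submissions_per_student_per_response_area", 0)
        # Prior submissions plus the current one.
        result["feedback"] = f"You have submitted {prior + 1} responses."
    return result
```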
Three pipelines are configured in `.github/workflows/`:

- `test-lint.yml`: runs on every pull request; executes the unit test suite and linting via `flake8`.
- `staging-deploy.yml`: runs on every push to `main`; re-runs tests, then builds the Docker image, pushes it to the shared ECR repository, and deploys to the staging environment.
- `production-deploy.yml`: promotes a tested build to the production environment.
- `app/docs/user.md`: end-user guide explaining inputs and parameters
- `app/docs/dev.md`: developer reference with detailed input/output specs and examples
To develop and test locally:
- Python 3.8 or higher
- Docker
- `git` CLI or GitHub Desktop
- A code editor (VS Code, PyCharm, etc.)
For questions or issues, open a GitHub issue or visit the lambda-feedback organisation.