We will present a range of “case study” assignment configurations, from simple through complex, using a variety of different automated grading methods including per-character and per-line output difference checkers, external unit testing frameworks (such as JUnit), memory debugging tools (Valgrind and DrMemory), code coverage (e.g., Emma), static analysis tools, and custom graders. Submitty can be customized per test case as appropriate to apply resource limits (running time, number of processes, output file size, etc.) and to display or hide from students the program output, autograding results, and testing logs.

Where to Store your Assignment Configurations

To allow backups and re-use of assignment configurations, we recommend that assignment configurations be prepared in a separate version-controlled repository (e.g., Git).

The assignment configuration may contain hidden input examples, solution output, and/or solution code that should not be publicly available to students. Thus, this repository should be private or shared only with other instructors or teaching assistants.

We suggest storing these per-course private repositories on the server, with controlled permissions. For example:
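A minimal sketch of such a layout is shown below. The directory and course names here are hypothetical examples only; on a production server these would typically live under a restricted root directory with permissions limited to instructors and teaching assistants.

```shell
# Hypothetical layout -- the directory and course names are examples only.
# Create a per-course directory for private assignment configurations.
mkdir -p private_course_repositories/csci1200
# Restrict access: owner and group (instructors/TAs) only.
chmod 750 private_course_repositories/csci1200
```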


Overall Structure of an Assignment Configuration

You can study the Tutorial sample assignment configurations here:
Tutorial Example Configurations

Additional configuration examples are also available.

Each assignment configuration will have a top level directory with a config.json file. Here is the general structure of a homework configuration directory:

   └── my_python_homework
       ├── config.json                   [ REQUIRED ]
       ├── custom_grader_code            [ OPTIONAL ]
       │   ├── grader.cpp
       │   ├── grader.h
       │   └── another_file.cpp
       ├── instructor_CMakeLists.txt     [ OPTIONAL ]
       ├── test_input                    [ OPTIONAL ]
       │   ├── input_1.txt
       │   ├── input_2.txt
       │   └── input_3.txt
       ├── test_output                   [ OPTIONAL ]
       │   ├── output_1.txt
       │   ├── output_2.txt
       │   └── output_3.txt
       └── test_code                     [ OPTIONAL ]
           ├── instructor_code_1.cpp
           └── instructor_code.h

Phases of Autograding

First Phase: Compilation

  1. Create a temporary compilation directory and copy all of the student's submitted files into the temporary directory.

  2. Copy the files from the test_code directory into the temporary directory.

  3. Scan through config.json for all testcases with type = “compilation”.

  4. Execute the “command”(s) for the compilation testcases.

  5. Copy the named executables from the temporary directory to the temporary execution directory. Also rename and copy the STDOUT.txt and STDERR.txt files from each compilation command (prefix with test case number).
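As a sketch, a compilation testcase in config.json might look like the following. The field names follow the conventions described in this document, but the filenames, command, and point value are hypothetical; consult the specification sections below for the exact keys and their meanings.

```json
{
    "testcases": [
        {
            "title": "Compile the program",
            "type": "compilation",
            "command": "g++ -Wall -o hello.out hello.cpp",
            "executable_name": "hello.out",
            "points": 2
        }
    ]
}
```

The "executable_name" identifies which file to copy from the temporary compilation directory to the temporary execution directory in step 5.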

Second Phase: Execution

  1. Copy the files from the test_input directory into the temporary execution directory.

  2. Scan through config.json for all testcases with type = “execution”.

  3. Execute the “command”(s) for the execution testcases.

  4. Rename the STDOUT.txt, STDERR.txt, execution logfiles, and any specified output files that should have been created by the program execution (prefix with test case number).
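An execution testcase might be sketched as follows. The command, filenames, and point value are hypothetical, and the "validation" entries are illustrative of the validation objects specified later in this document; verify the exact keys against the full specification below.

```json
{
    "title": "Run with the first input file",
    "type": "execution",
    "command": "./hello.out input_1.txt",
    "points": 5,
    "validation": [
        {
            "method": "diff",
            "actual_file": "STDOUT.txt",
            "expected_file": "output_1.txt"
        }
    ]
}
```

Here "expected_file" names a file from the test_output directory, which is copied into the temporary execution directory during the validation phase.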

Third Phase: Validation

  1. Copy the files from the test_output directory into the temporary execution directory.

  2. Scan through the test cases in the config.json and perform the validation checks indicated within each check.

  3. Calculate the score for each test case, and determine what messages and files should be displayed for each test case.

  4. Write the results.json and results_grade.txt files.

Overall Specification of a config.json file

You are allowed to have C/C++ style comments in a config.json file. These will be removed before compilation of the autograding executables.
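For example, a commented config.json might look like the following (the contents are illustrative only):

```json
{
    // Single-line comments are stripped before the JSON is parsed.
    "testcases": [
        /* Block comments are allowed as well. */
        {
            "title": "Compile",                 // trailing comments too
            "type": "compilation",
            "command": "g++ -o a.out main.cpp"
        }
    ]
}
```

Note that strict JSON does not permit comments, so this file should not be fed directly to a standard JSON parser without first stripping them.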

Specification of a Testcase

Types of Action

Specification of a Validation Object

Validation Methods