Sphere Engine modules compared

| | Containers module | Problems module | Compilers module |
| --- | --- | --- | --- |
| Overview | All-round source code execution and skill assessment solution, capable of running any software stack in real-life scenarios | Skills assessment component, perfect for testing algorithmic skills in server-side programming languages and where accurate execution time measurement is required | Robust source code execution module with accurate execution time measurement |
| Good for assessing programming skills? (in education, recruitment, e-learning, onboarding, and competitive programming) | Yes | Yes | No |
| Good for executing source code? (in interactive tutorials, coding playgrounds, and interactive documentation) | Yes | No | Yes |
| Source code executed in a safe run-time environment (sandbox) | Yes | Yes | Yes |
| High scalability (for large events and traffic peaks) | Yes | Yes | Yes |
Supported technologies
| | Containers module | Problems module | Compilers module |
| --- | --- | --- | --- |
| Server-side / classical programming languages (Python, Java, C#, C++, and others) | Yes | Yes | Yes |
| Front-end technologies (JavaScript, React, HTML, CSS, and others) | Yes | No | No |
| Multi-stack applications (e.g., Node.js + React + MySQL) | Yes | No | No |
| Data science and machine learning (Pandas, TensorFlow, Jupyter Notebook, etc.) | Yes | No | No |
| Mobile technologies (Flutter, React Native, and others) | Yes | No | No |
| Databases (MySQL, MongoDB, etc.) | Yes | Yes (SQLite) | Yes (SQLite) |
| Tools (e.g., git) | Yes | No | No |
| | Containers module | Problems module | Compilers module |
| --- | --- | --- | --- |
| Managing technologies / programming languages | You control which technologies, dependencies, libraries, etc. are included in your project. | Programming languages are managed by the Sphere Engine Team. We can add or update languages and libraries on your request. | Programming languages are managed by the Sphere Engine Team. We can add or update languages and libraries on your request. |
| Number of supported technologies / programming languages | Unlimited. You can install any technology you need; each project may contain different libraries and frameworks. | 80+ (full list: https://sphere-engine.com/supported-languages) | 80+ (full list: https://sphere-engine.com/supported-languages) |
| Example content | Yes, multiple projects for dozens of technologies | Yes, multiple programming challenges, each compatible with all supported programming languages | Yes, example programs with I/O processing |
| Cross-language content (can submissions written in different programming languages run against the same project / programming challenge?) | No. Projects are typically designed for specific technology stacks. | Yes. By default, all programming challenges can be used with any of the supported programming languages. | - |
| Content Management System | Yes (for managing projects) | Yes (for managing programming challenges / problems) | - |
General features
| | Containers module | Problems module | Compilers module |
| --- | --- | --- | --- |
| Skill assessment / execution evaluation (built-in features for testing programming skills) | Yes | Yes | No |
| Batch (non-interactive) execution | Yes, via API and in the Workspace (Online IDE) | Yes, via API and in web widgets | Yes, via API and in web widgets |
| Interactive execution (the end-user can interact with the executed application via web browser, command line, etc.) | Yes, in the Workspace (Online IDE) | No | No |
| Internet access (does the executed application have access to the Internet?) | Yes (can be disabled) | No | No |
| Cost efficiency | High to very high | Very high | Very high |
| Accuracy of the execution time measurement | Normal | High | High |
| Multi-file submissions | Yes (API and Workspace) | Yes (API) | Yes (API) |
| Multi-file content (projects / programming challenges) | Yes, 2 GB per project or more upon request | Yes, 20 MB per programming challenge (not counting test cases) | - |
Integration
| | Containers module | Problems module | Compilers module |
| --- | --- | --- | --- |
| Integration via RESTful API | Yes | Yes | Yes |
| Integration via web widget (with a ready, out-of-the-box end-user UI) | Yes, via workspaces: a fully customizable online IDE supporting web, desktop, mobile, and interactive applications | Yes, via configurable widgets with built-in programming skills verification | Yes, via configurable widgets with a coding playground |
| JavaScript SDK for the web widget | Yes | Yes | Yes |
| Content management via API (for managing projects in the Containers module and programming challenges in the Problems module) | Yes (coming soon) | Yes | - |
| Webhooks | Yes | Yes | Yes |
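All three modules can notify your system via webhooks when an execution finishes, instead of your system polling the API. The sketch below shows a minimal receiver; the endpoint path and the payload fields (e.g., `submission_id`) are illustrative assumptions, not the documented webhook schema.

```python
# Minimal webhook receiver sketch, assuming Sphere Engine is configured to
# POST a JSON payload to this URL when a submission finishes. The payload
# fields used here are assumptions, not the documented schema.
from flask import Flask, request

app = Flask(__name__)

@app.route("/sphere-engine/webhook", methods=["POST"])
def handle_webhook():
    event = request.get_json(force=True)
    submission_id = event.get("submission_id")  # hypothetical field
    # On notification, fetch the full results via the module's REST API.
    print(f"Submission {submission_id} finished")
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```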
Detailed execution workflow
Key concepts
Containers module:
  1. Project - configured by the content manager; contains a (partial) application and one or more scenarios.
  2. Scenario - describes the workflow, behavior, and results of a given execution scenario; consists of five stages.
  3. Stage - each of the five distinct stages (init, build, run, test, post) corresponds to a specific phase of the execution workflow.
Batch (non-interactive) mode:
  1. Checker - an agent capable of executing one submission at a time.
  2. Submission - launches a scenario; can be created via API by your system; contains the end-user's source code and a project ID.
Interactive mode:
  1. Workspace - associated with a project and has a configurable UI; can be created via API, embedded in your system, and displayed to the end-user.
  2. Execution - occurs when the end-user launches a scenario in a workspace.

Problems module:
  1. Programming challenge / problem - configured by the content manager; contains one or more test cases and a description of the problem to be solved by the end-user.
  2. Test case - contains input data and model output data.
  3. Test case judge - a program that validates the results produced by the end-user's program for a given test case.
  4. Master judge - a program that summarizes the partial results produced by the test case judges.
  5. Checker - an agent capable of executing one submission at a time.
  6. Submission - can be created via API (by your system) or a web widget (by the end-user); contains the end-user's source code and the problem ID.

Compilers module:
  1. Checker - an agent capable of executing one submission at a time.
  2. Submission - can be created via API (by your system) or a web widget (by the end-user); contains the end-user's source code and input data.
Execution workflow: batch (non-interactive) mode
Containers module:
  1. A submission is created via API by your system and placed in the queue.
  2. The submission is pulled from the queue by the checker.
  3. A container is created based on the project image. The submission files (the end-user's source code) are merged with the project files.
  4. A scenario is launched. The following stages are executed within the scenario:
    1. (optional) background services (e.g., MySQL) are launched;
    2. (optional) `init` - initializing files, resources, etc.;
    3. (optional) `build` - building the project, compiling the application, etc.;
    4. `run` - executing the submitted application;
    5. (optional) `test` - evaluating the execution results, the source code, etc.;
    6. (optional) `post` - preparing the data for extraction from the container.
  5. The results are now ready to be retrieved via API or webhooks.
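To make steps 1 and 5 concrete from the client's side, here is a minimal sketch. The endpoint, token parameter, and field names (`project_id`, `files`, `finished`) are assumptions, not the documented Containers API; consult the API reference for the real paths and parameters.

```python
# Sketch of steps 1 and 5 of the Containers batch workflow, client side.
import time

import requests

API_BASE = "https://containers.example.com/api"  # hypothetical endpoint
TOKEN = "<your access token>"

# Step 1: create a submission carrying the end-user's source code and a project ID.
resp = requests.post(
    f"{API_BASE}/submissions",
    params={"access_token": TOKEN},
    json={"project_id": 42, "files": {"main.py": "print('hello')"}},
)
submission_id = resp.json()["id"]  # assumed response field

# Step 5: poll until the scenario's stages (init, build, run, test, post)
# finish, then read the results. Webhooks can replace polling (see above).
while True:
    result = requests.get(
        f"{API_BASE}/submissions/{submission_id}",
        params={"access_token": TOKEN},
    ).json()
    if result.get("finished"):  # assumed status field
        break
    time.sleep(1)
print(result)
```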
Problems module:
  1. A submission is created via API (by your system) or a web widget (by the end-user) and placed in the queue.
  2. The submission is pulled from the queue by the checker.
  3. (optionally) The checker creates a dedicated sandbox for the compilation process, and the submitted program is compiled.
  4. For each test case:
    1. The execution sandbox is created, and the submitted program is executed.
    2. A separate sandbox is created for the test case judge that evaluates the results produced by the submitted program.
  5. A sandbox is created for the master judge that summarizes the partial results produced by the test case judges.
  6. The execution results are now ready to be retrieved via API, webhooks, or the JavaScript SDK.
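The client's side of this workflow looks like the sketch below. The endpoint shape and the `executing` status flag are assumptions; the submission fields follow the concepts above (source code plus the problem ID, and a compiler ID selecting the language), but verify all names against the official API reference.

```python
# Client-side sketch of the Problems batch workflow.
import time

import requests

API_BASE = "https://problems.example.com/api/v4"  # hypothetical endpoint
TOKEN = "<your access token>"

resp = requests.post(
    f"{API_BASE}/submissions",
    params={"access_token": TOKEN},
    data={"problemId": 1001, "compilerId": 116, "source": "print(input())"},
)
submission_id = resp.json()["id"]  # assumed response field

# The checker compiles the program, runs every test case, and the judges
# produce the final result; poll until that pipeline completes.
while True:
    sub = requests.get(
        f"{API_BASE}/submissions/{submission_id}",
        params={"access_token": TOKEN},
    ).json()
    if not sub.get("executing", True):  # assumed status flag
        break
    time.sleep(1)
print(sub.get("result"))
```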
Compilers module:
  1. A submission is created via API (by your system) or a web widget (by the end-user) and placed in the queue.
  2. The submission is pulled from the queue by the checker.
  3. (optionally) The checker creates a dedicated sandbox for the compilation process, and the submitted program is compiled.
  4. The execution sandbox is created, and the submitted program is executed.
  5. The execution results are now ready to be retrieved via API, webhooks, or the JavaScript SDK.
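From the client's side, the Compilers flow follows the same pattern, except the submission carries input data instead of a problem ID. Again, the endpoint shape and status flag below are assumptions to be checked against the API reference.

```python
# Client-side sketch of the Compilers batch workflow.
import time

import requests

API_BASE = "https://compilers.example.com/api/v4"  # hypothetical endpoint
TOKEN = "<your access token>"

resp = requests.post(
    f"{API_BASE}/submissions",
    params={"access_token": TOKEN},
    data={"compilerId": 116, "source": "print(input())", "input": "42"},
)
submission_id = resp.json()["id"]  # assumed response field

while True:
    sub = requests.get(
        f"{API_BASE}/submissions/{submission_id}",
        params={"access_token": TOKEN},
    ).json()
    if not sub.get("executing", True):  # assumed status flag
        break
    time.sleep(1)
# Streams, compilation messages, execution time, and memory usage arrive here.
print(sub.get("result"))
```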
Execution workflow: interactive mode
Containers module:
  1. A workspace is created via API and displayed to the end-user in your system.
  2. A dedicated container is assigned to the workspace and remains active for the workspace's runtime.
  3. (optionally) The end-user can interact with the container via the command line.
  4. The end-user launches a scenario. The same stages are executed in the scenario as for an API submission.
  5. Processes created during the scenario execution are killed. Changes in the file system are preserved.
  6. The execution results are ready to be retrieved via API, webhooks, or the JavaScript SDK.
  7. The end-user can launch the scenario again.
Problems and Compilers modules: interactive mode is not available (these modules support batch execution only).
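As an illustration of step 1 of the Containers interactive workflow above, the sketch below creates a workspace via the API and hands its URL to the front end. Every endpoint and field name here is a hypothetical placeholder rather than the documented Containers API.

```python
# Hypothetical sketch: create a workspace for a project, then embed it.
import requests

API_BASE = "https://containers.example.com/api"  # hypothetical endpoint
TOKEN = "<your access token>"

resp = requests.post(
    f"{API_BASE}/workspaces",
    params={"access_token": TOKEN},
    json={"project_id": 42},  # the project the workspace is associated with
)
workspace = resp.json()
# Embed the workspace (e.g., in an <iframe>) so the end-user can interact
# with the container and launch scenarios from the Workspace UI.
print(workspace.get("url"))  # assumed response field
```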
Skill assessment / execution evaluation logic

Containers module: The evaluation logic can be defined in the optional `test` stage. The Containers module supports multiple types of evaluation logic and comes with built-in tools that support the content manager in implementing the testing process. The most common evaluation scenarios are:
  1. Unit tests:
    1. `run` stage: unit tests of the tested application are executed.
    2. `test` stage: parses and analyzes the data from the previous stage.
  2. I/O:
    1. `run` stage: the tested application produces an output.
    2. `test` stage: validates the output (or other features) of the tested program.
  3. Web app testing tools:
    1. `run` stage: the tested web application is launched together with the test automation tool (Selenium, Cypress, etc.).
    2. `test` stage: parses and analyzes the data from the testing tool report.
  4. Custom:
    1. `run` stage: the tested application is executed.
    2. `test` stage: performs custom logic based on the results from the previous stage. Useful for testing databases, git, and other tools.
  5. Static code analysis (coming soon):
    1. `run` stage: static code analysis is performed on the evaluated application.
    2. `test` stage: parses and analyzes the output produced by the code analysis tool.
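As an illustration of scenario 1 (unit tests), the sketch below shows what a `test` stage script might do: parse a JUnit-style XML report left behind by the `run` stage and turn it into a pass ratio. The report path and the `SCORE` output convention are assumptions; how a `test` stage actually reports its result is defined by your scenario configuration.

```python
# Illustrative `test` stage script: convert a JUnit-style XML report,
# assumed to be written by the `run` stage, into a percentage score.
import xml.etree.ElementTree as ET

REPORT_PATH = "report.xml"  # hypothetical artifact produced by the run stage

root = ET.parse(REPORT_PATH).getroot()
# JUnit reports use either <testsuite> as the root or a <testsuites> wrapper.
suite = root if root.tag == "testsuite" else root.find("testsuite")
if suite is None:
    raise SystemExit("no <testsuite> element found in report")

total = int(suite.get("tests", 0))
failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
score = 0.0 if total == 0 else 100.0 * (total - failed) / total
print(f"SCORE {score:.1f}")  # hypothetical convention for reporting the result
```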
The Problems module uses I/O-based evaluation. The evaluation is performed in two steps:
  1. (for each test case) The test case judge validates the end-user's program based on the produced output (or, optionally, based on the program's source code).
  2. The master judge summarizes the partial results produced by the test case judges.
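The sketch below captures the idea behind a typical test case judge: compare the output produced by the end-user's program with the model output line by line, ignoring trailing whitespace. The file paths and the accept/reject signaling are placeholders, not the Problems module's actual judge interface.

```python
# Minimal sketch of a test case judge: exact line-by-line comparison of the
# produced output against the model output, ignoring trailing whitespace.
import sys

def normalize(text: str) -> list[str]:
    return [line.rstrip() for line in text.strip().splitlines()]

with open("user_output.txt") as f:    # hypothetical path
    produced = normalize(f.read())
with open("model_output.txt") as f:   # hypothetical path
    expected = normalize(f.read())

if produced == expected:
    print("OK")             # placeholder accept signal
    sys.exit(0)
else:
    print("WRONG ANSWER")   # placeholder reject signal
    sys.exit(1)
```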

The Compilers module is designed for simple and robust source code execution and has no built-in skill assessment features.

Suppose you're switching from an in-house compiler/checker and already have evaluation logic implemented in your system. In that case, you can use the Compilers module as a backend for executing end-users' source code. Note, however, that you will need to send input data and download the output generated by the end-user's program individually for each submission, which can affect the end-user's experience. For programming challenges with a large number of test cases or with large I/O data, we recommend using the Problems or Containers modules.

Output

Containers module (customizable):
  1. evaluation result and score,
  2. interactive content (web pages, desktop applications, console applications),
  3. unit tests report,
  4. custom files (images, CSV files, others),
  5. server logs,
  6. output and error streams (stdout, stderr),
  7. build / compilation error message,
  8. execution time,
  9. and others.
Problems module:
  1. evaluation result and score,
  2. truncated output and error streams (stdout, stderr),
  3. compilation error message,
  4. execution time,
  5. memory consumption.
Compilers module:
  1. output and error streams (stdout, stderr),
  2. compilation error message,
  3. execution time,
  4. memory consumption.