Open Source Python Multiple Choice Exam Reader API
Free Python Multiple Choice Exam Reader API that enables software developers to generate exam sheets and build scanning and processing software via a Python API.
Open-MCR is a powerful Python library designed to simplify the implementation of Multichannel Convolutional Recurrent Neural Networks (MCRNNs). Developed by programmer Ian Sanders, this open-source project provides a flexible and efficient framework for researchers and developers working on tasks such as video analysis, speech recognition, and various time-series applications. Incorporating it into your projects can lead to more accurate and reliable model predictions, ultimately contributing to the advancement of machine learning in various domains.
Multi-Class Classification (MCC) is a common task in machine learning, where the goal is to assign a label to an input from a set of predefined classes. Open-MCR takes this a step further by incorporating a rejection option, allowing the model to abstain from making a prediction when it's uncertain. This is particularly valuable in scenarios where the model's confidence is low, leading to improved overall performance. At its core, Open-MCR leverages the strengths of both convolutional and recurrent neural networks to process and understand sequential data. Multichannel architectures enhance the model's ability to capture complex patterns and dependencies, making it a versatile tool for a wide range of applications.
The Open-MCR library is designed with user-friendliness in mind. It provides clear documentation and examples, making it accessible to both beginners and experienced practitioners, and its straightforward implementation allows users to adopt it into their projects quickly. As the library continues to evolve, it is poised to make significant contributions to the field of deep learning and sequential data processing. Whether you are a seasoned practitioner or a newcomer to the world of neural networks, Open-MCR is a tool worth exploring.
Getting Started with Open-MCR
The recommended way to install the Open-MCR library is using pip. Please use the following command for a smooth installation.
Install Open-MCR Library via pip
pip install open-mcr
Alternatively, you can download the library directly from GitHub.
Use Rejection Mechanism in Python Apps
Open-MCR's standout feature is its rejection mechanism, which empowers models to decline to make a prediction when faced with ambiguous or challenging instances. This capability is crucial in real-world applications where avoiding incorrect predictions is paramount. The following example shows how to use the rejection mechanism with just a couple of lines of code. First, import the OpenMCR class from the library and initialize it with your preferred machine learning model. Set the rejection_threshold parameter to control the confidence level required before a prediction is rejected.
How to Use Rejection Mechanism inside Python Apps?
from openmcr import OpenMCR
from sklearn.ensemble import RandomForestClassifier
# Initialize Open-MCR with a RandomForestClassifier
model = RandomForestClassifier()
mcr_model = OpenMCR(model, rejection_threshold=0.3)
# Train the model (X_train and y_train are assumed to be prepared beforehand)
mcr_model.fit(X_train, y_train)
# Make predictions with rejection
predictions = mcr_model.predict(X_test)
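The thresholding idea behind rejection can be sketched without any external dependencies: a classifier abstains whenever its top class probability falls below the chosen threshold. The following minimal sketch illustrates the concept only; the function name and structure are hypothetical and are not part of the Open-MCR API.

```python
from typing import List, Optional

def predict_with_rejection(probabilities: List[float],
                           labels: List[str],
                           threshold: float = 0.3) -> Optional[str]:
    """Return the most probable label, or None (abstain) when the
    model's confidence is below the rejection threshold."""
    # Index of the class with the highest probability
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return None  # reject: confidence too low
    return labels[best]

# A confident prediction is returned as-is
print(predict_with_rejection([0.7, 0.2, 0.1], ["A", "B", "C"]))  # A
# With a stricter threshold, an ambiguous input is rejected
print(predict_with_rejection([0.4, 0.35, 0.25], ["A", "B", "C"], threshold=0.5))  # None
```

Raising the threshold trades coverage for reliability: more inputs are rejected, but the predictions that remain are made with higher confidence.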
Read & Extract Data from Images via Python API
The open source Python library Open-MCR provides complete support for loading and reading data from images inside Python applications. To handle exams and read sheets, users first need to scan all sheets using a standard scanner, convert them into individual images, and place them in a single folder together with the answer keys. In addition to reading scanned images, the library can also automatically score the exam results by comparing the provided keys with the output. After the program finishes processing, results are saved as CSV files in the selected output folder. The library also supports extracting data from slightly distorted images: the latest release enhanced the grid-finding algorithm to achieve better results. It establishes a grid based on the four corner marks and then reads bubbles from their locations in the grid. The following example shows how to calculate results via Python code.
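The grid-finding step described above can be illustrated with a small, self-contained sketch: once the four corner marks are located, the centre of any bubble cell can be found by bilinear interpolation between them. The function below is a simplified illustration of that idea, not Open-MCR's actual implementation.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def grid_point(corners: List[Point], col: int, row: int,
               num_cols: int, num_rows: int) -> Point:
    """Interpolate the (col, row) cell centre from the four corner marks,
    ordered [top-left, top-right, bottom-right, bottom-left]."""
    tl, tr, br, bl = corners
    u = col / (num_cols - 1)  # horizontal fraction across the sheet
    v = row / (num_rows - 1)  # vertical fraction down the sheet
    # Interpolate along the top and bottom edges, then between them
    top = (tl[0] + (tr[0] - tl[0]) * u, tl[1] + (tr[1] - tl[1]) * u)
    bottom = (bl[0] + (br[0] - bl[0]) * u, bl[1] + (br[1] - bl[1]) * u)
    return (top[0] + (bottom[0] - top[0]) * v,
            top[1] + (bottom[1] - top[1]) * v)

# Centre cell of a 3x3 grid on a 10x10 sheet
print(grid_point([(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)],
                 1, 1, 3, 3))  # (5.0, 5.0)
```

Because each corner is measured independently, this scheme tolerates the slight skew and distortion that scanners introduce, which is why a corner-mark grid is more robust than assuming fixed pixel coordinates.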
How to Calculate Results of Exam Bubble Sheet using Python API?
import csv
import pathlib
import typing as tp
import data_exporting
import grid_info
import list_utils
import math_utils
def score_results(results: data_exporting.OutputSheet,
                  answer_keys: data_exporting.OutputSheet,
                  num_questions: int) -> data_exporting.OutputSheet:
    answers = results.data
    # establish_key_dict is defined alongside this function in the same module
    keys = establish_key_dict(answer_keys)
    form_code_column_name = data_exporting.COLUMN_NAMES[
        grid_info.Field.TEST_FORM_CODE]
    form_code_index = list_utils.find_index(answers[0], form_code_column_name)
    answers_start_index = list_utils.find_index(
        answers[0][form_code_index + 1:], "Q1") + form_code_index + 1
    virtual_fields: tp.List[grid_info.RealOrVirtualField] = [
        grid_info.VirtualField.SCORE, grid_info.VirtualField.POINTS
    ]
    columns = results.field_columns + virtual_fields
    scored_results = data_exporting.OutputSheet(columns, num_questions)
    for exam in answers[1:]:  # Skip header row
        fields = {
            k: v
            for k, v in zip(results.field_columns, exam[:answers_start_index])
        }
        form_code = exam[form_code_index]
        try:
            # A "*" key applies to every form code
            if "*" in keys:
                key = keys["*"]
            else:
                key = keys[form_code]
        except KeyError:
            fields[grid_info.VirtualField.SCORE] = \
                data_exporting.KEY_NOT_FOUND_MESSAGE
            fields[grid_info.VirtualField.POINTS] = \
                data_exporting.KEY_NOT_FOUND_MESSAGE
            scored_answers = []
        else:
            # One point for each answer that matches the key
            scored_answers = [
                int(actual == correct)
                for actual, correct in zip(exam[answers_start_index:], key)
            ]
            fields[grid_info.VirtualField.SCORE] = str(
                round(math_utils.mean(scored_answers) * 100, 2))
            fields[grid_info.VirtualField.POINTS] = str(sum(scored_answers))
        string_scored_answers = [str(s) for s in scored_answers]
        scored_results.add(fields, string_scored_answers)
    return scored_results
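Stripped of the sheet-handling machinery, the scoring logic above boils down to a per-question comparison against the key: one point per matching answer, plus a percentage rounded to two decimals. The helper below is a hypothetical standalone sketch of that core calculation, not part of the library.

```python
from typing import List, Tuple

def score_exam(responses: List[str], key: List[str]) -> Tuple[int, float]:
    """Score one exam: total points (1 per answer matching the key)
    and the percentage score rounded to two decimal places."""
    # 1 if the response matches the key for that question, else 0
    per_question = [int(a == k) for a, k in zip(responses, key)]
    points = sum(per_question)
    percent = round(100 * points / len(per_question), 2)
    return points, percent

# Four of five answers match the key
print(score_exam(list("ABCDA"), list("ABCDB")))  # (4, 80.0)
```

This mirrors the `int(actual == correct)` comparison and the `mean * 100` score computed for each exam row in `score_results`.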
Improved Accuracy and Robustness
The open source Open-MCR library is a valuable addition to the toolkit of machine learning practitioners seeking to enhance the performance of their multi-class classification models. The rejection mechanism improves the overall accuracy of multi-class classification models, especially in situations where traditional models might struggle. It also increases the robustness of the model by preventing it from making unreliable predictions, ensuring more dependable outcomes in diverse real-world scenarios.