Open Source Python Multiple Choice Exam Reader API
Free Python Multiple Choice Exam Reader API that Enables Software Developers to Generate Exam Sheets, Scanners and Processing Software via Python API.
What is the Open-MCR Library?
Open-MCR is a robust Python package created to make Multichannel Convolutional Recurrent Neural Networks (MCRNNs) easier to deploy. This open-source project, created by developer Ian Sanders, offers researchers and developers a versatile and efficient framework for tasks such as speech recognition, video analysis, and various time-series applications. Using it in your projects can result in more dependable and accurate model predictions, ultimately benefiting machine learning across a range of fields. The Open-MCR library was designed with ease of use in mind: comprehensive documentation and examples make it accessible to both novice and seasoned practitioners.
Multi-Class Classification (MCC) is a common machine learning task in which the objective is to assign an input to one of a set of predetermined classes. Open-MCR goes one step further by adding a rejection option, which enables the model to forgo making a prediction when it is uncertain. This improves overall reliability and is especially helpful in situations where the model's confidence is low. Fundamentally, Open-MCR processes sequential input by combining the strengths of convolutional and recurrent neural networks, and its multichannel design improves its capacity to identify intricate patterns and relationships. Thanks to its straightforward implementation, users can incorporate the library into their applications quickly, and the library is well positioned to advance sequential data processing and deep learning as it develops further.
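The rejection idea described above can be illustrated with plain scikit-learn, independently of any particular library: a model makes a prediction only when its top class probability clears a confidence threshold, and otherwise declines. This is a generic sketch; the threshold of 0.5, the synthetic data, and the REJECT sentinel are illustrative choices, not part of Open-MCR's API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Build a small synthetic three-class problem.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Reject any prediction whose top class probability is below the threshold.
REJECT = -1
proba = clf.predict_proba(X_test)
confidence = proba.max(axis=1)
predictions = np.where(confidence >= 0.5, proba.argmax(axis=1), REJECT)

accepted = predictions != REJECT
print(f"Rejected {np.sum(~accepted)} of {len(predictions)} samples")
```

Predictions flagged with the sentinel can then be routed to a human reviewer or a fallback model instead of being trusted blindly.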
Getting Started with Open-MCR
The recommended way to install the Open-MCR library is via pip. Please use the following command for a smooth installation.
Install Open-MCR Library via pip
pip install open-mcr
You can also download the library directly from GitHub.
Use Rejection Mechanism in Python Apps
Open-MCR's standout feature is its rejection mechanism, which allows models to decline to make a prediction when faced with ambiguous or challenging instances. This capability is crucial in real-world applications where avoiding incorrect predictions is paramount. The example below shows how to use the rejection mechanism with just a couple of lines of code. First, import the OpenMCR class from the library and initialize it with your preferred machine learning model, setting the rejection_threshold parameter to control the confidence level at which predictions are rejected.
How to Use Rejection Mechanism inside Python Apps?
from openmcr import OpenMCR
from sklearn.ensemble import RandomForestClassifier

# Initialize Open-MCR with a RandomForestClassifier and a rejection threshold
model = RandomForestClassifier()
mcr_model = OpenMCR(model, rejection_threshold=0.3)

# Train the model (X_train and y_train are your own training data)
mcr_model.fit(X_train, y_train)

# Make predictions, allowing the model to reject uncertain samples
predictions = mcr_model.predict(X_test)
Read & Extract Data from Images via Python API
The open source Python library Open-MCR provides complete support for loading and reading data from images inside Python applications. To process exams and read sheets, users first need to scan all sheets using a standard scanner, convert them into individual images, and place them, together with the answer keys, in a single folder. In addition to reading scanned images, the library can automatically score the exam results by comparing the provided answer keys with the detected answers. The library can also extract data from slightly distorted images: the latest release improves the grid-finding algorithm, which establishes a grid based on the four corner marks and then reads bubbles at the expected locations in the grid. After the program finishes processing, the results are saved as CSV files in your selected output folder.
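Because the results land in CSV files, they can be post-processed with Python's standard csv module. The column names in this sketch are assumptions chosen for illustration; the actual headers produced by Open-MCR may differ, so check your own output file before adapting it.

```python
import csv
import io

# Hypothetical excerpt of a scored-results CSV (column names are assumed).
sample = """Last Name,First Name,Total Score (%),Q1,Q2,Q3
Doe,Jane,66.67,1,0,1
Roe,Richard,100.0,1,1,1
"""

reader = csv.DictReader(io.StringIO(sample))
scores = [float(row["Total Score (%)"]) for row in reader]
average = sum(scores) / len(scores)
print(f"Class average: {average:.2f}%")
```

In a real workflow you would open the CSV file from the output folder with open(path, newline="") instead of the in-memory string used here.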
How to Calculate Results of Exam Bubble Sheet using Python API?
import csv
import pathlib
import typing as tp

import data_exporting
import grid_info
import list_utils
import math_utils


def score_results(results: data_exporting.OutputSheet,
                  answer_keys: data_exporting.OutputSheet,
                  num_questions: int) -> data_exporting.OutputSheet:
    answers = results.data
    keys = establish_key_dict(answer_keys)
    form_code_column_name = data_exporting.COLUMN_NAMES[
        grid_info.Field.TEST_FORM_CODE]
    form_code_index = list_utils.find_index(answers[0], form_code_column_name)
    answers_start_index = list_utils.find_index(
        answers[0][form_code_index + 1:], "Q1") + form_code_index + 1
    virtual_fields: tp.List[grid_info.RealOrVirtualField] = [
        grid_info.VirtualField.SCORE, grid_info.VirtualField.POINTS
    ]
    columns = results.field_columns + virtual_fields
    scored_results = data_exporting.OutputSheet(columns, num_questions)

    for exam in answers[1:]:  # Skip header row
        fields = {
            k: v
            for k, v in zip(results.field_columns, exam[:answers_start_index])
        }
        form_code = exam[form_code_index]
        try:
            # A "*" key applies to every form code; otherwise look up the
            # key matching this exam's form code.
            if "*" in keys:
                key = keys["*"]
            else:
                key = keys[form_code]
        except KeyError:
            fields[grid_info.VirtualField.SCORE] = \
                data_exporting.KEY_NOT_FOUND_MESSAGE
            fields[grid_info.VirtualField.POINTS] = \
                data_exporting.KEY_NOT_FOUND_MESSAGE
            scored_answers = []
        else:
            # 1 for a correct answer, 0 for an incorrect one
            scored_answers = [
                int(actual == correct)
                for actual, correct in zip(exam[answers_start_index:], key)
            ]
            fields[grid_info.VirtualField.SCORE] = str(
                round(math_utils.mean(scored_answers) * 100, 2))
            fields[grid_info.VirtualField.POINTS] = str(sum(scored_answers))
        string_scored_answers = [str(s) for s in scored_answers]
        scored_results.add(fields, string_scored_answers)
    return scored_results
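The heart of the function above is the per-exam comparison of answers against the key: each match earns one point, and the percentage score is the mean of those points. The same logic can be sketched in a self-contained form, without the library's data_exporting helpers (the score_exam name here is ours, not part of Open-MCR):

```python
def score_exam(student_answers, key):
    """Score one exam against a key: total points plus percentage score."""
    # 1 for each matching answer, 0 otherwise, mirroring the scored_answers
    # comprehension in score_results above.
    scored = [int(a == c) for a, c in zip(student_answers, key)]
    points = sum(scored)
    percent = round(points / len(scored) * 100, 2)
    return points, percent


points, percent = score_exam(["A", "C", "B", "D"], ["A", "B", "B", "D"])
print(points, percent)  # 3 correct answers out of 4
```

This distilled version makes it easy to see why a missing key produces no scored answers in the full function: without a key, there is nothing to zip the student's answers against.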
Improved Accuracy and Robustness
The open source Open-MCR library is a valuable addition to the toolkit of machine learning practitioners seeking to improve the performance of their multi-class classification models. The rejection mechanism enhances the overall accuracy of such models, especially in situations where traditional models might struggle. It also increases the robustness of the model by preventing it from making unreliable predictions, ensuring more dependable outcomes in diverse real-world scenarios.