2025 3rd IEEE International Conference on Industrial Electronics: Developments & Applications (ICIDeA)
979-8-3315-3320-5/25/$31.00 ©2025 IEEE
Innovative AI Approaches in Oral Proficiency
Testing: Ethical Implications of ASR Systems
1 A Periyasamy
Department of English,
Rajalakshmi Institute of Technology,
Chennai, India
Periyasamy.a@ritchennai.edu.in

2 Geetha Manoharan
School of Business,
SR University,
Warangal, Telangana, India
geethamanoharan1988@gmail.com

3 Malik Bader Alazzam
Faculty of Information Technology,
Jadara University,
Irbid, Jordan
malikbader2@gmail.com

4 M. Mythili
Department of English,
Nandha Engineering College (Autonomous),
Erode, India
murugesanmythili@gmail.com

5 Giftsy Dorcas E
Department of English,
Kristu Jayanti College (Autonomous),
Bengaluru, Karnataka, India
giftsy@kristujayanti.com

6 Prema S
Department of English,
Panimalar Engineering College,
Chennai, India
prema@panimalar.ac.in
Abstract—Oral proficiency testing plays a critical role in language assessment; however, conventional ASR systems suffer from accent bias: trained predominantly on American English, they struggle to recognize regional and non-native accents, which leads to inaccurate evaluation. Because such systems cannot reliably recognize and assess non-American accents, they fail to assign unbiased, fair scores to speakers with regional or non-native English accents, and this bias distorts the fairness of the oral proficiency test. Current approaches fall short on fairness and standardization in natural-language scenarios: in practice, conventional ASR models introduce errors and biases when scoring speech, so speakers with regional or non-native accents are almost impossible to score correctly. To address this bias, this study proposes BERTAdv, which incorporates BERT's contextual understanding into an adversarial training method to enhance ASR accuracy in oral proficiency interviews (OPI). To enable BERTAdv to analyze both simple and complex linguistic attributes, the model is trained on a large, balanced dataset. The model achieves strong accuracy, with an R-squared of 0.93, an MAE of 0.26, and an RMSE of 0.46, and is more accurate and fairer than existing methods.
Keywords— Oral Proficiency, ASR Systems, Ethical bias
mitigation, BERTAdv
I. INTRODUCTION
Oral language proficiency, as a strand of communication skills, is central to readiness for personal, career, and civic life [1]. Speaking skills fall under general communication skills and encompass vocal quality, accent, tone, pacing, effective speech, and the mastery of meaningful dialogue [2]. Fluency, a cornerstone of classroom language learning and testing, is an important component of language ability because it entails listening, comprehending, and responding in parallel [3]. Assessment of receptive and productive skills carries significant weight in language acquisition and certification [4]. Because proficiency matters in educational, professional, and social settings, testing methods must be accurate, fair, and accessible [5]. With the progress of AI technology, conventional testing methods are not immutable but can be improved, making oral proficiency assessment less subjective and more comprehensive, and helping learners and institutions master the language more efficiently [6]. AI and ASR technologies offer unbiased, standardized, and easily accessible solutions: scoring is automated, logistical constraints are removed, and scalability is achieved [7]. This approach increases fairness, offers formative feedback, and can facilitate ongoing, large-scale assessment of language development for learners around the globe [8].
This research explores how these innovations, specifically ASR, can transform the way oral proficiency testing is conducted, eliminating evaluator bias and making the process more accurate. It aims to facilitate test taking and administration through remote testing while reducing reliance on face-to-face exams. In addition, AI's capability to provide individualized, data-backed feedback enhances the learning process. Ethical issues such as privacy, data security, and concerns arising from the nature of the training data fed to AI models are also examined. The key contributions of the study are as follows:
• Analysing the ethical implications of ASR systems in oral proficiency testing, focusing on bias, fairness, and transparency.
• Investigating the impact of accent and dialect recognition on ASR accuracy in language assessments.
• Evaluating the inclusivity of ASR systems for diverse language learners, including those with speech impairments.
• Proposing BERTAdv, which mitigates ethical biases related to non-native accents and speech speed, ensuring a more equitable evaluation for diverse linguistic speakers.
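The bias-mitigation idea behind adversarial training of this kind — learning a scoring representation that carries as little accent information as possible — can be illustrated with a minimal gradient-reversal sketch. This is not the paper's implementation: the linear "encoder" below stands in for BERT embeddings, the synthetic data, the bias term `0.3 * accent`, and all function names are illustrative assumptions only.

```python
import random

random.seed(0)

def make_batch(n=64):
    """Toy data: 2-dim 'speech features' = (proficiency, accent group)."""
    batch = []
    for _ in range(n):
        prof = random.uniform(-1.0, 1.0)     # true proficiency
        accent = random.choice([-1.0, 1.0])  # accent group label
        y = prof - 0.3 * accent              # biased human score: one accent penalized
        batch.append(((prof, accent), y, accent))
    return batch

def train(lam, steps=3000, lr=0.05):
    """Scorer z = w1*x1 + w2*x2 vs. adversary a_hat = u*z.

    The encoder weights descend the scorer gradient MINUS lam times the
    adversary gradient (gradient reversal); the adversary descends normally.
    """
    w1, w2 = 0.0, 0.0  # encoder/scorer weights
    u = 0.0            # adversary weight
    for _ in range(steps):
        batch = make_batch()
        g1 = g2 = gu = 0.0
        for (x1, x2), y, a in batch:
            z = w1 * x1 + w2 * x2
            ds = 2.0 * (z - y)       # d/dz of scorer loss (z - y)^2
            da = 2.0 * (u * z - a)   # d/d(uz) of adversary loss (u*z - a)^2
            gu += da * z
            # reversed adversary gradient pushes accent info OUT of z
            g1 += ds * x1 - lam * da * u * x1
            g2 += ds * x2 - lam * da * u * x2
        n = len(batch)
        w1 -= lr * g1 / n
        w2 -= lr * g2 / n
        u -= lr * gu / n
    return w1, w2
```

With `lam = 0` the model absorbs the raters' accent penalty (the weight on the accent feature settles near -0.3); with the adversarial term enabled, that weight is driven toward zero while proficiency is still predicted.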
The rest of the paper is organized as follows: Section II reviews the relevant literature, Section III introduces the methodology, Section IV presents the research outcomes, and Section V concludes the study.
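The accuracy measures reported in the abstract (R-squared, MAE, and RMSE) are standard regression metrics; as a quick reference, they can be computed as in the following sketch. The score values here are hypothetical, purely for illustration, and are not the paper's data.

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# illustrative proficiency scores (hypothetical, not from the study)
y_true = [3.0, 4.5, 2.0, 5.0, 3.5]
y_pred = [3.2, 4.4, 2.3, 4.8, 3.6]
```

A lower MAE/RMSE and an R-squared closer to 1 both indicate predicted scores closer to the reference ratings, which is why the paper reports all three together.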
DOI: 10.1109/ICIDEA64800.2025.10963053