August 4, 2025
The use of artificial intelligence (AI) technologies in distance education has gained considerable momentum worldwide, especially in the aftermath of the COVID-19 pandemic. AI-supported learning platforms, virtual assistants, and automated assessment tools have made it possible to offer students personalized support. For example, the AI-supported "EBA Assistant," deployed on the EBA platform of Turkey's Ministry of National Education, answered approximately 10 million messages within six weeks by providing instant responses to questions from students and parents during the distance education process. Similarly, the Academic Support Module on EBA was enriched with artificial intelligence to offer personalized questions and guidance, particularly for students preparing for exams, and was used by 1.17 million students. While these developments demonstrate the potential of AI in distance education, they also raise significant legal and ethical issues: the privacy of personal data, algorithmic transparency, the design of fair and non-discriminatory systems, and accountability.
This article examines the current state of artificial intelligence use in distance education with a focus on Turkey, addressing the legal questions raised by data privacy under the Law on the Protection of Personal Data (KVKK), the transparency of algorithmic decisions, the risks of bias and discrimination, and the use of AI in decision support mechanisms. It also discusses the legal boundaries of applications such as student monitoring, automated assessment, and adaptive learning systems, and evaluates administrative and ethical responsibilities as well as oversight mechanisms.
The proliferation of distance education in Turkey accelerated in 2020, as millions of students turned to online platforms due to the pandemic. During this period, EBA (Education Information Network) became one of the most visited education websites in the world, keeping nearly 18 million students in uninterrupted education throughout the distance learning process.
Within EBA, efforts have been made to improve the student experience through the use of artificial intelligence technologies. For example, the EBA Academic Support module was enhanced with artificial intelligence to provide question recommendations based on student performance and received significant attention during the distance education period. In addition, the chatbot named EBA Assistant was launched in April 2020, using natural language processing techniques to understand students' questions and provide instant answers on topics such as obtaining passwords, learning class schedules, and communicating with teachers. The fact that this assistant answered nearly 10 million questions from 2.68 million users within six weeks also demonstrates the need for AI-based support during distance education.
Higher education institutions have also begun to research AI applications in distance education; particularly universities offering open and distance learning have experimented with integrating chatbots and learning analytics tools into their student support services. At Anadolu University, an artificial intelligence-based virtual advisor has been used in a pilot capacity since the 2010s to provide information to distance education students. In recent years, universities have been taking strategic steps regarding AI; within the scope of Turkey's National Artificial Intelligence Strategy (2021-2025), universities are assigned tasks such as opening graduate programs in the field of AI and increasing their R&D capacity. However, a recent study has revealed that awareness regarding the use of AI in education in Turkey is still low and that there is a lack of knowledge among stakeholders on how to implement AI. Therefore, for the efficient and secure adoption of AI in distance education, there is a need to both develop the technical infrastructure and strengthen the legal/ethical framework.
Artificial intelligence systems used in distance education generally operate by processing the personal data of students. Data such as student registration information, exam and assignment results, online activity data, and audio and video recordings can be analyzed by AI algorithms. This situation gives rise to significant obligations in Turkey under Law No. 6698 on the Protection of Personal Data (KVKK) and related legislation.
According to the KVKK, educational institutions are responsible for lawfully processing students' personal data; data processing activities must be for specific, explicit, and legitimate purposes, and the data collected must not be excessive and must be relevant to the purpose. In applications such as remote exam proctoring, the recording of students' camera images may be considered biometric data, thus falling into the "special categories of personal data" under the KVKK, which requires stricter measures. In this context, institutions are expected to fully carry out explicit consent processes, inform students about which data is collected and for what purpose, and, if necessary, offer alternative methods. For example, the KVKK's recommendations in the field of AI suggest creating a specific data protection compliance program for each project, ensuring data minimization, and conducting a Privacy Impact Assessment (PIA) in high-risk situations.
Data security is also a critical element in this context. Distance education platforms are obligated to protect students' personal data against cyberattacks. While the KVKK mandates the implementation of appropriate technical and administrative measures, it also places a responsibility on the developers of artificial intelligence systems to comply with the principle of "privacy by design." In 2021, the Turkish Personal Data Protection Authority published its "Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence," emphasizing the need to protect fundamental rights and freedoms in AI applications. These recommendations include principles such as being transparent in data processing, granting data subjects the right to stop the processing of their data and to have it erased, and using anonymized data.
Specifically, students and parents must be informed about the data collected in distance education systems and must be able to exercise their rights over this data (access, rectification, erasure, etc.). Otherwise, under the KVKK, students can file a complaint against the educational institution as the data controller, and administrative sanctions may be imposed by the Board. In conclusion, the use of AI in distance education must be built upon a strong privacy and data protection infrastructure. This is essential for both legal compliance and for students to have trust in digital environments.
The use of artificial intelligence algorithms in education brings the principle of algorithmic transparency to the forefront. Systems that operate as a "black box," whose decision-making processes are incomprehensible, can lead to trust issues for students and educators.
Legally, under the KVKK and general principles of Turkish law, individuals have the right to receive information about processes concerning them. Although the KVKK, unlike the GDPR, contains no specific article on automated decision-making, the obligation to inform means that if students are subject to assessment by an AI, the method must be disclosed as a matter of transparency. The KVKK's AI guide likewise rests on the principle of accountability, recommending to developers and decision-makers that algorithms be auditable and explainable. In this context, it is important for educational institutions to provide at least a minimum level of transparency about the functioning of the AI tools they use. For example, if a university uses an automated exam grade recommendation system, students should be told that the grade was suggested by an algorithm and which criteria were taken into account, and there should be a human intervention mechanism to review the matter if a student objects. Indeed, the KVKK recommendations envision granting individuals the right to object to automated processes concerning them and the opportunity to halt the data processing activity if necessary.
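To make this concrete, below is a minimal Python sketch of what such a transparency-and-objection interface might look like. The criteria names, weights, and the GradeRecommendation structure are hypothetical illustrations, not any actual EBA or university system; the point is that the suggestion carries its own explanation and is never final without human review.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights; a real system would take these from the
# institution's published assessment policy.
WEIGHTS = {"midterm": 0.3, "assignments": 0.2, "final_exam": 0.5}

@dataclass
class GradeRecommendation:
    student_id: str
    suggested_grade: float
    contributions: dict                      # per-criterion contribution, for the explanation
    status: str = "pending_human_review"     # a suggestion is never final on its own

def recommend_grade(student_id: str, scores: dict) -> GradeRecommendation:
    """Compute a weighted grade suggestion and record how each criterion contributed."""
    contributions = {k: WEIGHTS[k] * scores[k] for k in WEIGHTS}
    return GradeRecommendation(student_id, round(sum(contributions.values()), 1), contributions)

def explain(rec: GradeRecommendation) -> str:
    """Produce the student-facing notice required by the duty to inform."""
    parts = ", ".join(f"{k}: {v:.1f} pts (weight {WEIGHTS[k]:.0%})"
                      for k, v in rec.contributions.items())
    return (f"This grade of {rec.suggested_grade} was suggested by an algorithm. "
            f"Criteria considered: {parts}. You may object and request human review.")

rec = recommend_grade("S-1024", {"midterm": 70, "assignments": 85, "final_exam": 78})
print(explain(rec))
```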
Algorithmic transparency is not limited to informing the user; it is also a matter of the system's explainability. Being able to audit the basis of artificial intelligence decisions allows unfair or erroneous results to be detected. In education, this means, for example, being able to understand the criteria by which an AI-supported assessment tool assigns scores, or on the basis of which data an adaptive learning system recommends specific content to a student. The European Union's AI Act takes a clear approach on this matter, obliging high-risk AI systems to provide information in a way that users can understand. Although Turkey does not yet have specific binding rules for AI, the general principle of transparency applies across a wide spectrum, from administrative law to contract law, and educational institutions are likewise obligated to be transparent with their students. It would therefore be good practice to request "white-box" models or explanation interfaces from the suppliers of AI systems used in distance education, or at least summary reports on their decision-making processes. Ultimately, the principle of accountability requires educational institutions to take the measures necessary to avoid the errors or negative impacts of AI; otherwise, legal and administrative liability will follow. Indeed, the Draft Artificial Intelligence Law prepared in Turkey (though not yet enacted) also sets out compliance with the principles of safety, transparency, equality, accountability, and privacy as a general framework for the field.
The use of biased or incomplete datasets for training AI systems gives rise to the problem of algorithmic bias. When artificial intelligence is used in education, these biases can lead to the systematic disadvantaging of certain student groups. For example, in 2020 in the United Kingdom, when exams were canceled due to the pandemic, students' grades were predicted by an algorithm; however, because this grading algorithm was based on historical school performance, it lowered the grades of successful students in disadvantaged schools while relatively favoring students in private schools. When it was understood that the algorithm reinforced social inequalities, it was met with intense backlash, and the government was forced to cancel the algorithm's results. This event is a striking example of the potential for AI use in education to create a discriminatory impact.
From a legal perspective, the principle of equality and the prohibition of discrimination in the Turkish Constitution require educational institutions to avoid practices that lead to direct or indirect discrimination. If an artificial intelligence system produces discriminatory results based on factors such as race, gender, or socio-economic status, the public institution applying those results can be sued in administrative court to have the action annulled, or liability for compensation may arise. Furthermore, the KVKK's AI recommendations suggest that developers collaborate with academic institutions where necessary and seek the opinions of impartial experts to identify potential biases that could create a risk of discrimination. This is not just a technical problem but also a legal obligation, since deploying an AI application whose biases have not been addressed would amount to discrimination through negligence.
Looking at international examples, we see that the prohibition of discrimination has been extended to the use of AI. In the US, under federal civil rights legislation, discrimination based on race or gender in schools is prohibited, and the Department of Education investigates violations in this area. Indirect discrimination by artificial intelligence tools can also be examined within this scope. For example, if an AI-based disciplinary monitoring system used in a US school disproportionately flags Black students, this situation could be considered a violation of Title VI (of the 1964 Civil Rights Act). Similarly, a career guidance algorithm that contains gender stereotypes could lead to a Title IX violation.
In the face of such risks, civil society organizations and students in the US have already begun to raise their voices. Notably, the digital rights organization Electronic Privacy Information Center (EPIC) filed a complaint in 2020 on the grounds that some popular exam proctoring software produced biased and erroneous results against students. The European Union's approach is also clearly anti-discriminatory: the recitals of the AI Act specifically emphasize that poorly designed or misused educational artificial intelligence systems can violate the right to education and the right not to be discriminated against. For this reason, the EU regulation makes compliance with the principle of equality a prerequisite for high-risk AI systems used in education, mandating that datasets be as unbiased and representative as possible and that results undergo regular bias testing.
In conclusion, special attention must be paid to the principles of justice and inclusivity during the design and use stages of artificial intelligence applications in distance education. Both developers and educational institutions should monitor the effects on different groups and proactively correct any discriminatory outcomes that may arise. Otherwise, there is a risk of facing both legal sanctions and eroding public trust.
In education, artificial intelligence affects not only the learning experience of students but also administrative decision-making processes. As decision support systems, AI can be used in many areas, from student admissions to scholarship allocation, from program placement to early warning systems. For example, some universities are considering using algorithms to score applicants or to predict their "probabilities of success." Similarly, teachers or educational administrators can benefit from AI tools that predict student success and identify those at academic risk. Although such applications have the potential to support data-driven decisions and bring objectivity and speed, it is necessary to carefully draw the legal boundaries in this context.
First and foremost, the difference between "decision support" and "decision making" must be emphasized. Legally, it is considered problematic to leave decisions with significant consequences entirely to automated systems. Article 22 of the EU General Data Protection Regulation (GDPR) grants individuals the right not to be subject to decisions based solely on automated processing which produce legal effects concerning them or similarly significantly affect them (with some exceptions). Although the KVKK contains no such explicit article, in light of its fundamental principles and developments in Europe, it is accepted that decisions affecting students' academic lives should not be left solely to AI. Indeed, the KVKK's AI guide also states that when automated processes affect individuals, the person should be given the opportunity to express their own point of view and be offered alternatives to fully automated decisions. In practice, this means ensuring that every AI-supported decision is reviewed or approved by a human. For example, if an artificial intelligence system generates a risk score regarding whether a student will be successful, this score should be only a recommendation; any consequential action concerning the student (such as placement in a support program or a disciplinary measure) must first be evaluated by a responsible official.
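As an illustration of this "recommendation, not decision" pattern, here is a minimal Python sketch; the toy scoring formula, thresholds, and field names are invented for the example and stand in for whatever model an institution actually uses:

```python
from datetime import datetime, timezone

def ai_risk_score(attendance_rate: float, avg_grade: float) -> float:
    # Toy model: low attendance and low grades raise the risk score (0..1).
    return round(0.6 * (1 - attendance_rate) + 0.4 * (1 - avg_grade / 100), 2)

def propose_intervention(student_id: str, attendance_rate: float, avg_grade: float) -> dict:
    """Return a recommendation only; nothing takes effect without human approval."""
    score = ai_risk_score(attendance_rate, avg_grade)
    return {
        "student_id": student_id,
        "risk_score": score,
        "recommendation": "support_program" if score > 0.5 else "no_action",
        "approved_by": None,                # must be filled in by a human official
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def approve(proposal: dict, official_id: str, decision: str) -> dict:
    """A named official reviews the AI recommendation and takes the actual decision;
    the human may override the AI's suggestion entirely."""
    proposal["approved_by"] = official_id
    proposal["final_decision"] = decision
    return proposal

p = propose_intervention("S-2048", attendance_rate=0.55, avg_grade=48)
p = approve(p, official_id="advisor-07", decision="invite_to_support_program")
```

Note that in this example the official's final decision differs from the algorithm's "no_action" suggestion, which is exactly the kind of override a human-in-the-loop design must allow.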
The use of AI in decision support systems also brings with it the issue of the distribution of responsibility. If a university administration makes a negative decision about a student based on the recommendation of an AI tool, and it is later understood that this decision was unfair, who will be responsible? In Turkish law, when administrative decisions are found to be unlawful, the institution is responsible; it is not possible to evade responsibility by saying, "The AI said so." Therefore, administrations that use AI for decision support purposes must not forget that the ultimate responsibility lies with them and should position AI tools merely as an auxiliary element. From the point of view of legal oversight, if a student objects to an AI-derived decision, the judicial authorities will examine whether the reasoning for the decision is reasonable and lawful. Thus, it becomes important for decision support systems to keep traceable records that can show what information was evaluated and how.
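A simple way to keep such traceable records is an append-only log that captures, for each recommendation, exactly which inputs were evaluated, what was produced, and by which model version. The sketch below is one hypothetical approach; the file name and the hash-based tamper-evidence are illustrative choices, not a prescribed standard:

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "decision_audit.jsonl"   # hypothetical append-only audit log

def log_decision(student_id: str, inputs: dict, output: dict, model_version: str) -> None:
    """Record exactly which data the system evaluated and what it recommended,
    so that an auditor or court can later reconstruct the decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "student_id": student_id,
        "inputs": inputs,                  # the data the model actually saw
        "output": output,                  # the recommendation it produced
        "model_version": model_version,    # ties the decision to a specific model
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()                          # simple tamper-evidence for each record
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("S-2048",
             inputs={"attendance_rate": 0.55, "avg_grade": 48},
             output={"risk_score": 0.48, "recommendation": "no_action"},
             model_version="risk-model-1.3")
```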
The European Union's AI Act defines decision support applications related to education and vocational training (especially systems that determine access to educational opportunities or significantly affect a student's education) as "high-risk." In this context, criteria such as risk management, quality measurement, human oversight, and transparency will become mandatory for student admission systems, grading tools, and exam proctoring artificial intelligence. The regulation even bans AI applications in the educational field that attempt to infer the emotional state of students. This points to red lines that should not be crossed even in decision support: trying to decipher students' private emotions poses a greater risk of rights violations than it offers pedagogical benefit. Similar specific regulations on the use of AI in educational decisions can be expected in Turkey as well. The currently prepared Draft Turkish Artificial Intelligence Law mentions that AI systems can be defined as "high-risk" and foresees that such systems be registered and continuously monitored by authorized authorities. Although the details are not yet clear, this approach could also be applied to AI decision support applications in the field of education.
In summary, the use of AI for decision support purposes in distance education can help human decision-makers gain speed and objectivity. However, for legal certainty, it is essential that AI-supported decision processes are transparent, auditable, and under human control. No decision that affects a student's future should be left solely to an algorithm by completely disabling the human element. Otherwise, both individual grievances and legal violations may become inevitable.
Student monitoring software has become widespread as a way of ensuring exam security in distance education. This software records video and audio of students during an exam through their cameras and microphones, can monitor their screen activity, and attempts to detect suspicious behavior using artificial intelligence. From a legal standpoint, serious privacy and confidentiality issues arise here. The continuous observation of a student's room and behavior while they take an exam from their own home constitutes a form of digital surveillance, even if it serves an educational purpose. A precedent-setting case in the US is the lawsuit filed by a student at Cleveland State University in Ohio: the student argued that the scanning of their room with a webcam before an exam was an unreasonable search in violation of the Fourth Amendment, and the federal court agreed, ruling the practice unconstitutional. The court emphasized that the privacy of a student's home outweighs the justification of exam security. This decision has placed a significant limit on practices such as pre-exam room scans in universities' remote exams. Although no comparable constitutional case has yet arisen in Turkey, the principles of the inviolability of the home and the protection of private life could be evaluated in a similar manner. Conducting a scan that shows the inside of a student's home without their consent is problematic in terms of fundamental rights. Moreover, from the perspective of the KVKK, such an image recording can only be processed with explicit consent and under very strict security measures; otherwise, it would constitute a data breach.
Another dimension of student monitoring software is its use of artificial intelligence techniques such as facial recognition and behavior analysis. For example, a program may create a cheating risk score by analyzing a student's eye movements, facial expressions, or typing speed. These systems can generate false alarms and have, in some cases, been alleged to be biased against certain groups. For instance, online proctoring software used by some universities in the Netherlands during the pandemic reportedly had difficulty recognizing the faces of students with darker skin tones and frequently flagged them as "not present," which sparked debate about racial discrimination. The issue drew the attention of the Dutch Data Protection Authority, which launched an investigation in 2020 into the GDPR compliance of such software. Ultimately, a court in the Netherlands ruled that online proctoring could continue where sufficient measures were taken and student consent was obtained; the allegation of discrimination based on skin color, moreover, was not proven. Nevertheless, the episode holds important lessons about the transparency and oversight of student monitoring software. It highlighted the need both to make the algorithms fairer technically and to keep students' participation in these applications voluntary as far as possible. In practice, owing to these concerns, some universities have opted for open-book exams or project assignments in remote assessment instead of proctoring software.
Another use of AI in distance education is the automation of exam and assignment assessments. While the automated grading of multiple-choice exams has been practiced for years, AI has now made text analysis-based assessment of short answers or essays possible. For example, some systems can score essays written by students based on criteria such as grammar, coherence, and argument quality. Legally, the most important issue here is the fairness of the assessment and the appeal mechanism. If a student's grade is determined entirely by an AI system, to whom will the student appeal if they believe the grade is unfair? For this reason, in a potential regulation by authorities like YÖK or the Ministry of National Education, conditions could be introduced, such as the verification of automated assessment results by at least one human instructor or a review by a teacher if the student requests a re-evaluation. Otherwise, the student's right to education and the principle of transparency in assessment could be undermined. Another issue is the risk of automated assessment systems being unable to correctly evaluate different languages or forms of expression. Especially in languages with complex grammatical structures like Turkish or with creative expressions, it is possible for an AI system to miss nuances. This could also create an indirect inequality. Therefore, automated grading tools should only be used as a supportive tool, and the final decision-making authority should still be the course instructor.
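One hedged way to operationalize "AI as a supportive tool, instructor as final authority" is confidence-based routing: the automated score is always provisional, and low model confidence or a student appeal forces instructor review. A minimal sketch, with an assumed confidence threshold of 0.80:

```python
def route_essay_score(ai_score: float, ai_confidence: float,
                      appeal_requested: bool = False) -> dict:
    """An automated essay score is never final: low model confidence or a
    student appeal always routes the essay to the course instructor."""
    needs_human = appeal_requested or ai_confidence < 0.80
    return {
        "provisional_score": ai_score,
        "status": "instructor_review_required" if needs_human
                  else "provisional_pending_instructor_signoff",
    }

print(route_essay_score(72.5, ai_confidence=0.64))                        # low confidence
print(route_essay_score(88.0, ai_confidence=0.95, appeal_requested=True)) # student appeal
```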
Another common AI application on distance education platforms is the adaptation of learning materials and activities according to a student's individual performance. These systems personalize the content that will be presented to the student in the next step by looking at the student's previous topic achievements, learning speed, and preferences. Legally, although adaptive systems may not seem as controversial as assessment or monitoring, they still have dimensions of equality and the right to access. For example, if an adaptive system continuously offers "low-level" content to a student and never directs them to advanced material, that student may have been unknowingly offered a lower learning opportunity. In this case, the system's algorithm "labels" the student and "locks" them at a certain level, which can be pedagogically problematic and may also conflict with the principle of equal opportunity in education. For this reason, elements of diversity and exploration should be preserved in the design of adaptive systems; students should be given the opportunity to go beyond the system's recommendations and make their own choices. Another point is that the intensive data tracking by adaptive systems can create a sense of constant surveillance for the student. Stress and anxiety may increase in students who know that their every click and every response time is being recorded. This can also indirectly become a factor affecting the right to education. Therefore, adaptive systems must strictly adhere to the principle of privacy when processing student data, collect only the necessary data, and operate with the students' permission/consent.
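To illustrate how an adaptive system can preserve exploration and student choice, the following Python sketch uses a simple epsilon-greedy rule; the mastery thresholds, content levels, and the 20% exploration rate are arbitrary illustrations, not a recommended pedagogy:

```python
import random

CONTENT_LEVELS = ["remedial", "core", "advanced"]

def recommend_content(mastery: float, epsilon: float = 0.2) -> str:
    """Adaptive recommendation with a deliberate exploration element, so the
    system does not permanently 'lock' a student at one level."""
    if random.random() < epsilon:
        return random.choice(CONTENT_LEVELS)   # exploration: occasionally offer any level
    if mastery < 0.4:
        return "remedial"
    return "core" if mastery < 0.75 else "advanced"

def choose(recommended: str, student_override: str | None = None) -> str:
    """The student may always override the system's recommendation."""
    return student_override or recommended

print(choose(recommend_content(mastery=0.35)))              # usually 'remedial'
print(choose(recommend_content(mastery=0.35), "advanced"))  # the student's own choice wins
```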
The common denominator in all these applications is the principles of proportionality and lawfulness. The opportunities offered by AI in distance education are certainly valuable; however, in using these opportunities, the fundamental rights of students (such as privacy, the right to education, equality, data protection, etc.) must not be violated.
The use of artificial intelligence applications in distance education creates administrative liability for public institutions (state schools, universities). In a public university, for example, the decision to use an AI system for exam proctoring is an administrative act of that institution and must be lawful. If the rights of students are violated because of this AI system (e.g., a student being unfairly punished due to a false cheating detection), the student can file a lawsuit against the administration to request the annulment of the decision and compensation for their damages. From the perspective of administrative law, issues of service fault or the administration's liability may arise. Therefore, educational institutions should conduct legal risk analyses before using AI tools, test the systems with pilot applications if possible, and anticipate problems. Furthermore, responsible units for the use of these technologies should be designated within the administration; for example, a "Digital Education and Artificial Intelligence Commission" could be established at a university to ensure continuous improvement by also collecting feedback from students.
Beyond the minimum legal requirements, the use of AI in education is an ethical issue. Academic institutions must use technology without undermining public trust. For this reason, many international organizations have prepared ethical principle guides for AI in education. UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence emphasizes the protection of human-centered values, respect for privacy, stakeholder participation, and sensitivity to cultural differences in the use of AI in education. The OECD has similarly addressed the education sector within its AI Principles. In Turkey, too, the Ministry of National Education's YEĞİTEK directorate has drawn particular attention to ethical principles in artificial intelligence workshops. Although such steps are not yet legal obligations, they establish self-regulation mechanisms to ensure the responsible use of AI.
The effective oversight of the use of artificial intelligence applications in education is possible through both internal and external mechanisms. Internally, the aforementioned commissions or information technology units should regularly monitor the performance, errors, and data security of AI systems. The concept of algorithm auditing becomes important here: the outputs of the AI system should be examined at regular intervals to measure whether it contains bias or makes unexpected errors. If necessary, the auditing of the systems can be ensured by inviting independent experts or researchers (for example, a university could have the AI-based assessment tool it uses reviewed by a research group from its IT faculty, and receive a report). The draft AI Law in Turkey foresees the registration of high-risk AI systems and their continuous monitoring by "authorized supervisory authorities." Although this supervisory authority has not yet been clarified (for example, whether it will be the KVKK or the Digital Transformation Office), it is likely that AI systems in education will also be included in this scope when the law is enacted. Under the European Union's AI Act, member states will also be required to establish national supervisory authorities to register and monitor high-risk systems.
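As a concrete example of what a periodic algorithm audit might compute, the sketch below compares the rates at which an AI system flags students across (hypothetical) groups and marks large disparities for investigation; the 0.8 threshold is borrowed loosely from the US "four-fifths" rule of thumb and is an assumption, not a legal standard:

```python
from collections import defaultdict

def flag_rate_by_group(records):
    """records: iterable of (group_label, was_flagged) pairs from the system's logs."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparity_check(rates, threshold=0.8):
    """For flag rates, a higher rate disadvantages a group, so each group is
    compared to the lowest-rate group; ratios below the threshold get flagged."""
    baseline = min(rates.values())
    report = {}
    for group, rate in rates.items():
        ratio = baseline / rate if rate else 1.0
        report[group] = {"rate": round(rate, 3),
                         "ratio_to_baseline": round(ratio, 3),
                         "investigate": ratio < threshold}
    return report

rates = flag_rate_by_group([("A", True), ("A", False), ("A", False),
                            ("B", True), ("B", True), ("B", False)])
print(disparity_check(rates))   # group B is flagged twice as often -> investigate
```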
In addition, certification and standards are also an oversight tool. In the KVKK's AI recommendations, the development of rules and certification mechanisms for AI applications was suggested. This indicates that in the future, the software used in education may be required to have certain certifications. For example, if an exam proctoring software has received a "Student Privacy Compliance" certificate from an independent organization, universities can make their selection by looking at this certificate. Likewise, the EU AI Act also foresees a conformity assessment for high-risk systems similar to CE marking.
Finally, stakeholder participation is an important part of oversight. Students, parents, and teachers, as the groups who experience the effects of AI use firsthand, should be included in the process through feedback mechanisms. For example, if a university uses artificial intelligence for grading, it should collect satisfaction and fairness perception surveys from students at the end of the term to identify and address potential problems. This way, ethical and operational issues can be identified and corrected before a legal dispute arises.
The European Union has enacted a comprehensive regulation named the AI Act, which systematically addresses the use of artificial intelligence across sectors. This regulation, adopted in 2024, is the world's first comprehensive AI legislation. The AI Act adopts a risk-based approach, classifying AI systems according to their risk levels and imposing obligations accordingly. The education sector is identified as one of the high-risk areas in the AI Act. Specifically, AI systems used in education or vocational training that determine individuals' access to education or evaluate their success (for example, student admission algorithms, grading tools, or exam proctoring systems) fall into the high-risk group. The providers and users of AI applications in this category will be subject to a series of strict obligations:
High-risk AI systems must undergo a comprehensive risk assessment before being placed on the market, and the impact of errors in an educational context must be minimized. For example, a grading algorithm would be tested on diverse demographic groups to ensure that it operates consistently.
The technical documents of these systems, information on how the algorithm works, and user manuals must be kept ready for presentation to the competent authorities. Additionally, the system must log critical decision steps during its operation to allow for auditing when necessary.
The AI Act imposes an obligation to inform the users of these systems (whether students, teachers, or institutional administrators) that the system contains AI and to provide a plain-language explanation of how it works. In practice, a student might need to be given information such as, "This exam was assessed with the support of artificial intelligence." Furthermore, content generated by artificial intelligence (for example, automated feedback texts) must be labeled as such.
Human oversight is mandatory for high-risk applications. In education, this practically means that teachers and officials must be able to change decisions made by the AI when necessary and to halt the system when it produces an erroneous result with unwarranted confidence. The AI Act introduces the principle that critical decisions should not be made without human intervention and oversight.
Systems will be required to achieve a certain level of accuracy and to operate stably, and their performance will be monitored during use to ensure it does not degrade. For example, a facial recognition-based identity verification system would be expected to stay within thresholds for both false negative (failing to recognize the student) and false positive (mistaking someone else for the student) error rates; a minimal sketch of how such rates could be monitored follows this list.
The data used to train AI systems in education will need to be as representative, error-free, and unbiased as possible. Additionally, the method of collecting this data must be compliant with GDPR, and the data should be able to be anonymized when necessary.
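Returning to the accuracy and robustness obligation above: monitoring the false negative and false positive rates of, say, a face-verification gate requires little more than counting outcomes from the system's logs. A minimal sketch, with entirely hypothetical policy thresholds:

```python
def error_rates(results):
    """results: iterable of (is_genuine_student, was_accepted) pairs taken from
    a face-verification log. Returns false-negative and false-positive rates."""
    fn = fp = genuine = impostor = 0
    for is_genuine, accepted in results:
        if is_genuine:
            genuine += 1
            fn += int(not accepted)      # genuine student rejected
        else:
            impostor += 1
            fp += int(accepted)          # someone else accepted as the student
    return {"false_negative_rate": fn / genuine if genuine else 0.0,
            "false_positive_rate": fp / impostor if impostor else 0.0}

# Hypothetical thresholds an institution might set in its own policy:
THRESHOLDS = {"false_negative_rate": 0.05, "false_positive_rate": 0.01}

def within_thresholds(rates: dict) -> bool:
    """True if every monitored error rate stays within its policy threshold."""
    return all(rates[k] <= v for k, v in THRESHOLDS.items())

print(within_thresholds(error_rates([(True, True), (True, False), (False, False)])))
```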
The AI Act also defines prohibited practices. In the educational context, the most notable is the ban on using artificial intelligence to infer the emotional state of students. In recent years, some companies have tried to analyze students' emotions from webcam images to measure engagement in class or to provide real-time feedback based on facial expressions. The EU has deemed such practices harmful to student psychology and privacy and has prohibited emotion inference in educational settings, allowing only narrow exceptions such as medical or safety reasons. Manipulative tools (for example, hidden algorithms designed to steer a student's decisions) are likewise prohibited.
The AI Act entered into force in August 2024, with its obligations applying in stages over a transition period of two to three years. This regulation will also indirectly affect Turkey: on one hand, Turkey is likely to adopt similar standards as part of its EU harmonization process; on the other, international educational technology companies will update their products to EU norms. Indeed, these developments in the EU have brought the concept of "trustworthy AI" to the forefront globally. Educational technology providers are already announcing work to make their products compliant with the AI Act. For example, a Netherlands-based edtech company has declared that it will always apply principles such as putting pedagogy before technology, transparency, human oversight, and the right to appeal in its artificial intelligence tools. Such examples show that the EU regulation is already shaping industry behavior.
At the federal level in the US, there is not yet a comprehensive AI law like the EU's. Instead, the approach has been to adapt existing laws to the use of AI and to govern through guiding principles. In education, legislation centered on student privacy and civil rights provides the framework. For example, the Family Educational Rights and Privacy Act (FERPA) protects the privacy of student records in schools and grants parents the right to access and rectify those records. If a school uses an AI-based application that transfers a student's data to a third party, it must act in compliance with FERPA; otherwise, it risks losing federal funding. The Children's Online Privacy Protection Act (COPPA) requires parental consent for the collection of data from children under the age of 13, so online AI tools used in elementary and middle schools must obtain parental consent as COPPA requires. Many student privacy laws have also been enacted at the state level: according to a report by the Future of Privacy Forum, more than 128 student privacy laws have been passed in the states since 2013. These laws detail the terms of data use and security in the contracts that schools make with technology suppliers. For example, California's Student Online Personal Information Protection Act prohibits educational technology providers from using student data for advertising purposes.
Furthermore, anti-discrimination legislation is also applied to the use of AI. The U.S. Department of Education's Office for Civil Rights oversees whether there is discrimination in schools based on race, color, national origin (Title VI), sex (Title IX), or disability (ADA and Section 504 of the Rehabilitation Act). If an artificial intelligence application is not accessible to students with disabilities (for example, not being compatible with a screen reader for the visually impaired), this could be raised as a violation of the ADA. Apart from this, the US administration has also put forward some guiding principles. In 2022, the White House published the "Blueprint for an AI Bill of Rights," outlining five core principles: (1) Safe and Effective Systems, (2) Algorithmic Discrimination Protections, (3) Data Privacy, (4) Notice and Explanation, and (5) Human Alternatives, Consideration, and Fallback. Although these principles are not legally binding, they were prepared to guide public agencies in the use of technology.
In conclusion, the use of artificial intelligence in distance education is an area that will inevitably grow, and the regulatory framework will develop in parallel. Turkey has already taken steps in data protection with laws like the KVKK and has set forth its principles with its national AI strategy. However, as distance education becomes a permanent component, it will be necessary to create a balanced legal framework that protects student rights and encourages innovative applications. In this context, legislative updates, internal guidelines, awareness training, and stakeholder participation should be the four main pillars.
The use of artificial intelligence in distance education represents a new era in education while also requiring a multi-dimensional legal review. The current situation in Turkey shows that a cautious transition to AI has begun, thanks to the digitalization accelerated by the pandemic. The EBA platform and some university initiatives have offered examples of how AI can increase efficiency in education. However, the implementation of these technologies brings with it risks such as the protection of personal data, the violation of students' private lives, algorithmic errors, and biases.
As we have discussed in this article, ensuring data privacy and security under the KVKK is one of the most fundamental obligations. Student data should be collected only lawfully and to the extent necessary, and artificial intelligence systems must respect the rights of data subjects when processing it. Algorithmic transparency and accountability are essential for establishing trust in a public domain like education; students and parents should be informed about what the AI is doing and be able to object to its errors. Algorithms that produce unfair or discriminatory results are neither technically nor legally acceptable: an artificial intelligence that undermines the principle of equal opportunity in education will do more harm than any intended benefit.
Furthermore, the principle of proportionality must be observed in applications such as student monitoring and automated assessment. Just because it is technologically possible does not make it legitimate to monitor students all the way into their homes or to record their every step. The question, "is there a less intrusive method?" should always be asked here. For example, different exam designs can be considered for remote exam security instead of room scans. In AI applications aimed at supporting students, such as adaptive learning systems, transparency and respect for student choice should be prioritized.
Administrative and ethical responsibilities call for institutions to act proactively. The use of artificial intelligence in education should be treated not as a one-time decision, but as a process that requires continuous monitoring and development. In this context, school and university administrations should establish their own internal control mechanisms and oversee the process with structures such as data protection officers and ethics committees. Oversight mechanisms should not just exist on paper; they should be operationalized through regular reporting, handling of complaints, and being open to external audits. For example, a university could publish an "AI Applications Impact Report" annually to share both what is going well and the problem areas with the public.
International developments are valuable in guiding Turkey. The EU's AI Act will likely set the global standards for AI in education, and Turkey will probably try to align with these standards. Case examples from the US, on the other hand, warn us about potential pitfalls: for example, non-transparent algorithms or excessive monitoring can be halted by judicial decisions, and the reputations of institutions can be damaged. Therefore, adopting a "design for values" approach that is compliant with law and ethical values from the very beginning is the most direct path.
In conclusion, the use of artificial intelligence in distance education is a matter of balance. On one hand, there is innovation, efficiency, and personalized educational opportunities; on the other, there is privacy, equality, trust, and accountability. To achieve this balance, tasks fall to legislators, education policymakers, and practitioners alike. Concrete steps will include updating legislation (for example, specific regulations like a "Regulation on the Use of Digital Technologies in Education" may be made in the future), increasing awareness and training efforts (such as providing data privacy and AI literacy training to teachers and students), and making the technological infrastructure more transparent (such as promoting open-source algorithms).
It must not be forgotten that the ultimate goal of technology in the field of education should be to improve the learning experience and to increase equal opportunity. Artificial intelligence can be a powerful tool in achieving this goal; however, if it is not guided by the right principles, it can be diverted from its purpose. Law and ethics provide this guidance, helping AI-supported distance education to be implemented in a safe, fair, and effective manner. In an environment where this balance can be achieved, students, parents, and educators alike will be able to benefit from the innovations brought by technology without concern. Thus, distance education, enriched by artificial intelligence, will move one step closer to its goal of providing quality and equitable education to all segments of society.