Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the updraftplus domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home/evelyntest/public_html/wp-includes/functions.php on line 6114

Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the rocket domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home/evelyntest/public_html/wp-includes/functions.php on line 6114

Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the wordpress-seo domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home/evelyntest/public_html/wp-includes/functions.php on line 6114
The Need for AI Ethics Rather Than a Choice
Evelyn Logo

Why AI Ethics is Not a Choice But a Necessity

The journey of AI is nothing but a roller coaster ride filled with remarkable advancements but also occasional pitfalls. These dynamic shifts have brought about a series of consequential outcomes, both good and bad, especially in education. AI ethics is crucial in education as it ensures the rights and well-being of students.

The ongoing wave of AI innovations has been building up for quite a while. Amidst the mix of excitement and concern surrounding these developments, it is important to acknowledge that, when used ethically, these technologies hold immense potential for driving remarkable academic progress in education.

Ethical AI and Ed-Tech

AI ethics in educational institutions are crucial as they safeguard students’ privacy and well-being as well as the protection of their personal data. 

It is important to regularly monitor and reassess the AI algorithms employed in education to promote fairness and avoid biases for all students. 

However, educators and students should be well informed about the uses of ethical AI as it has the potential to greatly contribute to the students’ accomplishments when used thoughtfully. 

How AI Be Utilized Ethically Within Educational Institutions

Generative AI for school homeworks and assignments 

As AI tools like ChatGPT become increasingly powerful, professors are starting to worry about the future of education. At the same time, college students are left pondering whether AI can assist them with their school assignments, and if it does, would it be considered cheating?

In a survey conducted by BestColleges of 1000 students, including both undergraduate and graduate students, more than 51% of students agreed that taking help from ChatGPT, Google Bard, etc is considered cheating.

On the other hand, 20% disagreed for the same and the rest remained neutral.

51% of students agreed that using AI tools for school purposes is considered cheating

51% of students agreed that using AI tools for school purposes is considered cheating

Most people believe that one should not entirely ban ChatGPT from their studies. While it cannot fully capture the essence of a human’s unique views and way of thinking, it can still be used for ideas and knowledge. 

Generative AI itself follows ethical procedures and does not allow or motivate students to use AI tools for their schoolwork as it is considered plagiarism. We tested ChatGPT by asking it to provide us with some tips and tricks to cheat on our homework, and here are the results.

ChatGPT considered cheating and copying for school work an unethical practice

ChatGPT considered cheating and copying for school work an unethical practice

But that’s not it. When used in a manipulative or indirect manner, generative AI can produce ethically problematic responses.

 It might provide a technically correct answer, but how would it know the intentions or context of the person seeking information?

This particular problem highlights the limitations of AI in replacing human judgment and perception.

AI Ethics for Educators

Teachers play a crucial role in harnessing the power of AI in the classroom, keeping in mind the ethical implications and potential biases that may come with these tools. 

Things that Educators Must Know About the Use of Ethics in AI in Classrooms

Things that Educators Must Know About the Use of Ethics in AI in Classrooms

  1. Safeguard Student Privacy

With AI’s ability to process vast amounts of student data, including sensitive information like demographics and learning disabilities, teachers must grasp how this data is collected, stored, and utilized. This knowledge empowers teachers to protect student privacy with the utmost care.

  1. Promote Fairness

AI systems can influence students’ outcomes, from grades to college admissions. Teachers have the ability to remain unbiased, promote fairness, and not favor or discriminate against any particular group of students.

  1. Nurture Digital Citizenship

In an increasingly AI-driven world, it is essential for teachers to educate students about ethical considerations surrounding AI, including bias and privacy concerns. By empowering students with this knowledge, teachers can help students make informed decisions about technology usage.

You may also like to read about: Influencing the Future of Education with AR/VR

AI Tools for Developing Critical-Thinking Skills

In our 5th episode of Ed-Insights by Evelyn Learning Systems, we got the pleasure of having an insightful discussion with Marisa Zalabak.

Marisa Zalabak is the founder of Open Channel Culture, an AI ethicist, an educational psychologist, and a TEDx keynote speaker. 

Marisa Zalabak, AI ethicist, in conversation with Evelyn Learning Systems on using AI ethics

Marisa Zalabak, AI ethicist, in conversation with Evelyn Learning Systems on using AI ethically

We asked her if students are getting away from doing hard work by using AI tools such as ChatGPT, Google Bard, and Microsoft Bing.

She replied with an enlightening insight. She said, “There are many professors in many prestigious universities that require students to write their articles and essays with ChatGPT. The twist is that they have to write these things three times for three different answers”. 

“This will lead them to compare those three answers and write one article of their own. This process will engage the student and develop their critical thinking skills”. 

We also asked Marisa Zalabak about how AI is shaping the social development of young learners. She raises an intriguing example to illustrate the social consequences of biases in AI. 

She asked us to imagine a young child interacting with virtual assistants like Alexa and Siri, who respond with an adult woman’s voice. This interaction would somehow fit into a young child’s mind that women are servants.

Child Interacting with Virtual Assistants such as Alexa and Siri

 Child Interacting with Virtual Assistants such as Alexa and Siri

This highlights the importance of being mindful and deliberate in the choices made by edtech companies regarding the voices and interactive methods used in their products.

By actively considering these factors, edtech companies can create more engaging and empowering experiences for young learners, ensuring that technology plays a positive role in shaping their perspective and values, and ethics in AI is an important step in this direction.

Additional Concerns Regarding AI in Other Fields

Mental Privacy and Cognitive Liberty

Recent studies have unveiled the extraordinary capabilities of combining neuroimaging technology with artificial intelligence. This amazing union enables the decoding of mental states, visual experiences, hidden intentions, and even dreams with accuracy.

Does this pose a threat to our “Right to Mental Privacy”? Who would guard us against unauthorized access to our brain data, ensuring that all our thoughts and dreams remain solely ours?

Does Artificial Intelligence Pose a Threat to our “Right to Privacy”?

Does Artificial Intelligence Pose a Threat to our “Right to Privacy”?

A TedTalk by Nita Farahany uncovers the reality of brain trackers and how, without any privacy protection, they can lead to the exploitation of our thoughts and desires.

Employees are often subject to brain surveillance in the workplace to track their attention and fatigue, and governments create brain biometrics to interrogate people at borders. 

We must do something to safeguard our right to privacy and cognitive liberty from the risks of getting our brains manipulated or hacked. 

Healthcare

Did you know that information generated by AI can sometimes be misleading or inaccurate? This can be risky, particularly when it comes to health.  

While the World Health Organization (WHO) is excited about using technologies in healthcare, they are also concerned about using these technologies in a safe and efficient manner. 

Generative AI itself Suggests not Relying on Auto-Generated Tools When it Comes to Health

Generative AI itself Suggests not Relying on Auto-Generated Tools When it Comes to Health

When you interact with an AI model, it may seem like the answers it provides are reliable. However, there’s a chance that these answers could be completely wrong or contain serious errors, especially when it comes to medicine and disease-related questions. 

Also Read: How China is Trying to Outpace the World in AI

WHO suggested taking all these concerns into consideration before the widespread usage of generative AI in health and medicine. 

Policies to Regulate Generative AI

From 2021 to 2023, there was a surge in proposed legislation regarding artificial intelligence. But these proposals couldn’t make it to the president’s desk. Now, there is a lot of fascination with generative AI, and it has caught the attention of policymakers in Washington. 

A report from the MIT Technology Review listed some AI-related bills to keep an eye on:

  1. Algorithmic Accountability Act

This bill was introduced in the year 2022, before the upsurge of ChatGPT. The bill aims to tackle the real-life consequences of automated decision-making systems. There are some situations where people were harmed due to the results of these AI tools, such as the wrong medications.

The Algorithmic Accountability Act seeks to address these harms and ensure accountability for the decisions made by these algorithms.

  1. American Data Privacy Protection Act

This act aims to establish regulations governing the collection and processing of data by companies. In the coming years, this act would not only prohibit generative AI companies from engaging in discriminatory data collection practices but also empower users by giving them the authority to check how their data is being utilized. 

RELATED POSTS

Leave a Reply

Your email address will not be published. Required fields are marked *

LEAVE A COMMENT

Your email address will not be published.

Categories

Latest Posts

Tags

Find us on social networks

Evelyn Learning Logo White

Impact of AI on Education and Looking into the Future of Education: 2024