UCL EDUCATE

Instagram, AI, and the $90,000 Ethics Question

Imagine a stranger taking a picture of your child in the street, then using an app to find out their personal details, and yours. Your full name. Your date of birth. Your home address.



Wherever this sits within the realms of your imagination, the reality is that this type of technology already exists and is already being used by police and intelligence agencies in the United States.


A company called Clearview AI has developed just such an app, using millions of photos scraped from social media sites such as Facebook and YouTube to match images. By doing so it can also identify names and other personal details.


Controversies around individuals’ rights to privacy over images on social media sites are not new. In 2014, artist Richard Prince sold images of individuals that he had curated from Instagram for as much as $90,000. Each. These were not images he had taken himself: he simply scoured Instagram for interesting pictures of individuals, left comments on their feeds, and then printed them as part of his exhibition “New Portraits”.


Whilst the implementation of GDPR in 2018 clarified that photos are to be treated as sensitive data, the letter of the law alone does not tell us how people are to exercise their rights to confidentiality. Nor does it offer guidance on how to understand the confidentiality of material that can readily be found in the public domain, or on what to do in the face of new technologies that blur the lines of our privacy rights even further.


Let’s take as an example the company generated.photos, which advertises itself as a way for businesses to “Enhance your creative works with photos generated completely by AI”. Using AI, the company generates “Unique, worry-free model photos” by creating images of people who don’t actually exist (to its knowledge) from the composite characteristics of real people who do. How those reference characteristics have been curated is unclear, but would GDPR allow an individual to refuse to have their characteristics used for a service like this? And if so, how would that be policed?


These questions become even more important when this type of technology is applied in educational settings with children and young people. The significant and immediate implications for schools’ safeguarding responsibilities have not prevented education systems around the world from experimenting with facial recognition.


In China, for example, facial recognition is being used in schools to gauge pupils’ responses in class: yawning or looking bored is supposed to signal to the teacher that the lesson needs to be more interesting and engaging. Recently, however, the authorities have had a change of heart, revealing that they plan to “curb and regulate” the use of facial recognition tools amid concerns over privacy.


In the United States, it has been used as a security measure to detect expelled students under police surveillance and to stop them entering school or attending events from which they are barred. But it has also raised questions about the amount of information being gathered about young people, their activities and associations, as well as about the accuracy of facial recognition on darker skin tones. Similarly, in India, facial recognition is being used to monitor student behaviour after a student was found murdered in a toilet. The authorities believe it will also improve transparency and accountability, as parents will be able to watch their children in lessons in real time.


In Australia, meanwhile, where facial recognition is being trialled to record absenteeism among students, commentators are debating the ethics of using the technology without the consent of young people. They are asking: should we be using facial recognition just because it is available?


Professor Rose Luckin, founding director of UCL EDUCATE and professor of learner-centred design at UCL Knowledge Lab, believes the negatives need to be weighed against the benefits – and that people need to be better informed about AI and the implications of its uses.


She said: “Commentators are quite right to focus on the negative aspects of AI, as these can detrimentally affect people’s lives in terms of their personal security and, in the case of children, safeguarding.


“We need to open up the debate about the potential of AI when weighed against the negatives.


“For example, a teacher who is working with hundreds of children remotely every week would be able to identify each child using this sort of technology, enabling a more personalised relationship and better engagement between student and teacher.


“All of us who work with technologies such as AI need to help people understand what it is, what it can and cannot do, so that they are not surprised when they discover the possibilities. This will help them to make informed decisions about the personal information they share publicly, including images.”


The tech sector as a whole is paying increasing attention to the question of ethics in AI. In 2018, Google published its own AI principles “to guide ethical development and use of the technology”. Since then, the company has also developed a series of tools which put the principles into action, such as testing AI decisions for fairness.


Closer to home, Professor Luckin co-founded the Institute for Ethical AI in Education (IEAIED), whose work focuses on developing frameworks and mechanisms to help ensure that AI across education is designed and deployed ethically. On February 25 the Institute will release its interim report, which will examine the ethical issues around AI and offer a shared vision for ethical conduct in the sector.


The IEAIED will undoubtedly identify the many advantages, and pitfalls, of using AI to overcome educational barriers. But the ultimate question will remain: just because AI means we can do something, does it mean that we should?

