Power Structures, AI and Society: Discussing AI and Big Data from an Ethical Perspective with Paul Schütze

by Gaia Mizzon | Apr 18, 2024 | Editorial Blog

How can a Cognitive Science degree shape your future? Stories about career prospects from successful alumni of the Institute.

In today’s post, we continue our series dedicated to the Cognitive Science alumni of the University of Osnabrück by interviewing Paul Schütze, a research assistant and PhD candidate in the Ethics and Critical Theories Research Group at the Institute of Cognitive Science. Paul completed both his bachelor’s and master’s degrees in Cognitive Science at Osnabrück University, finishing his master’s in 2021 with a specialization in Philosophy of Cognitive Science and Neuroinformatics and Robotics.

Paul’s research interests mainly lie in the organization and functioning of digital capitalism and how it relates to the climate crisis we are currently facing. More specifically, his current research project focuses on the power and subjectivation dynamics that characterize the age of Big Data and AI.

In a video published in our Journal, Anna Luise Backsen, a Cognitive Science student in Osnabrück, gives a broad and detailed overview of these topics, discussing some of the main ethical implications and risks connected to the use of AI within the larger framework of big tech companies’ impact on our economy and society. To establish a connection between Anna’s work and the research conducted by the Ethics of AI Group, we asked Paul to watch the video and share his opinion on the topics presented, along with his personal experience and advice for students who aim to work in this research field in their future careers.

Can you introduce yourself and tell us what you are doing now? 

I am Paul and I am a researcher and PhD candidate in the Ethics and Critical Theories Research Group at the University of Osnabrück. So, I work in the humanities, and particularly at the intersection of AI Ethics, critical social theory, Science and Technology Studies and philosophy.

In my doctoral thesis, I work on the connection between AI technologies and the climate crisis. Specifically, I am interested in understanding how AI technologies shape societies and their responses to climate issues. This means that in my work I look beyond the often-discussed environmental impact and resource use of AI systems such as ChatGPT. Rather, I am interested in the social, economic and cultural processes, as well as the power dynamics, that are attached to the development and use of AI applications. And so, I ask what impact these wider processes and dynamics have on the climate crisis.

What student work have you done at university that led you to your current place?

During my studies I worked as a student assistant for different seminars and professors. Due to my interests, these were mainly philosophy courses at the Institute of Cognitive Science. This work helped me a lot to get a better understanding of what I would enjoy working on. And so, I believe this was important in developing the idea that I might enjoy doing a PhD.

Also, I took part in many courses that I did not need for my studies. This helped me a lot to develop a better idea of my own interests. For instance, during my year abroad in the U.S., I really enjoyed taking an intensive machine learning course. Yet, I also realized that the more theoretical work in the social sciences, and the critical views that come with it, are very important to me.

In addition, I participated in some of the committees at the institute as well as some student initiatives at the university. I felt this gave me a good understanding of how some of the processes within the university work.

What are the essential skills that you acquired during your studies in Cognitive Science that turned out to be helpful during your career/current job?

I think the broad knowledge in diverse fields is very helpful for understanding and bringing different perspectives together. This goes along with the organizational skills required to do so many different things at the same time. Doing weekly programming homework alongside math exercises and writing weekly philosophy essays taught me to manage my time rather well. I feel like I still benefit from this.

In addition to this, I think learning about these different perspectives and ways of thinking and doing science provides you with a rather unique view on scientific and other questions. This interdisciplinary and novel perspective can be very valuable at times.

What was your favorite course/seminar? 

If I remember correctly, these were a couple of courses by Imke von Maur, where she really made us reconsider the political stakes of what it means to do “science”, and where I came across very cool critical work from the social and political sciences that influenced me a lot.

Also, I took this one course at the philosophy institute with Christian Lavagno, who gave a great introduction to critical theory and Marxist thought. I still use some of the insights I gained there in my work now. 

What would you do differently in your studies? 

I would give myself a bit more time to take more courses, including at other institutes at the university. And I would probably engage further in political activism outside of my studies.

I would like to ask you to watch this video by Anna Luise Backsen, a Cognitive Science student at Osnabrück, published in our Journal. In this work, the author tackles the ethical implications and potential risks of Artificial Intelligence and big data, first giving a broad overview of how data and advertising markets function economically, and then shedding light on some of the crucial consequences of big tech products. Given your research interests, we thought it would be interesting to connect Anna’s project to your work at the Ethics of AI research group by asking you some questions about the topics discussed, based on your experience.

Wow, I really enjoyed watching the video. It is a great and easy-to-follow summary of many of AI’s current problems.

The scenario depicted in this video is very worrying and draws our attention to many concrete risks associated with online platforms that are gradually taking over more and more aspects of our daily lives. In the context of your research, can you give an example of how these products have greatly influenced our society in terms of power, inequality and public understanding?

One example that shows the power structures embedded in AI is Google’s latest AI system, “Gemini”. Its function is similar to ChatGPT’s, but the novel thing seems to be that it is multi-modal. This means the system can supposedly take in and work with not only text but also, for instance, speech and images.

Now, the interesting thing to me is not so much the new range of applications of this system. But if you watch Google’s video promoting “Gemini”, a few things stick out. For instance, it quickly becomes apparent what kind of society the people at Google imagine in the way they present their new model. It is a society defined by technological progress, a reality in which AI is literally needed to create a better future. Here, AI is portrayed as a necessary tool to solve humanity’s problems. However, these ideas of a technological future are built on very concrete and rather exclusionary presuppositions. Not everyone might want to use AI in such a way. It might not even seem desirable to everyone to live in a society built around technological progress. Thus, within this AI system we find that certain ideals of progress and human well-being or development manifest themselves. Progress apparently means developing a multi-modal AI system rather than reaching net-zero emissions.

And while the AI system is advertised as being helpful for everybody around the globe, it is clear that mostly people in the richer Global North will profit from it. This is not only because it is mostly in these societies that AI applications are useful at all, but also because the company developing this system (Google) and its employees are largely situated in the US or Europe.

Of course, there are many more critical observations that follow from this. However, this provides a glimpse of how power structures and dominant interests shape the development of AI technologies and how this perpetuates and even creates new forms of discrimination and inequality.

How can we take action to stay in control and make more conscious use of these products?

For me there are at least two avenues here. The first is political regulation. This is also what is mostly discussed in public; think, for instance, of the EU data regulations or the EU AI Act, which has gained more attention recently. While in my opinion these policies are too weak and do not adequately address the problems we highlighted above, they are nonetheless a first step in the right direction. We might very well imagine and strive for stronger regulations in the future.

The second avenue is more of a bottom-up process: people coming together and organizing against big tech. This has happened, for example, in the US with the Amazon Union, or in Berlin, where workers of the gig economy (for the delivery company Gorillas) have protested for better working conditions. This shows that uniting against the big tech companies is possible. Here, it is also important to bring different struggles together. For instance, organizations typically focused on the climate crisis might join forces with workers’ rights groups as well as groups focused on tech-related issues. Importantly, tech companies and the AI industry specifically need to be addressed in this regard. Thus, we might need more of a focus on AI-related issues in these joint struggles.

Based on your personal experience, what would you recommend to a student looking to embark on a career in the ethics of AI and data studies?

I would recommend attending as many classes on this topic as you can find and are interested in. Try to get an overview of what this field is all about and where your interests might lie. Ethics of AI can mean many different things, and you might need to find out what it means for you. Personally, I think Ethics of AI needs more critical theory and more fundamental questions, rather than just producing ethics guidelines. Of course, you do not have to focus only on the social sciences; you might also go in more technical directions, for instance toward feminist data science. But be careful of “ethicswashing” and do not fall for shiny promises of using AI for good. Really try to question the technological choices and develop a critical stance towards these applications. That being said, I do not think there is one right or wrong way into this.

Gaia Mizzon

“I am a Cognitive Science Master’s student passionate about philosophy, neuroscience, and scientific communication. Being a member of the Website Team of the Cognitive Science Student Journal, I am eager to contribute to this initiative by giving voice to our community through interviews and articles; that is the main goal behind the Editorial Blog.
I am so thrilled to see how this project will develop and grow, thanks to the vibrant community that we are lucky to be part of.”
