CONTRIBUTED BY
Karolina, ExpertHub Team
DATE
Dec 2, 2025
Over the past few years, research groups at the University of Helsinki have developed a prominent series of courses as part of projects endeavouring to make AI upskilling accessible.
Elements of AI has attracted nearly 2 million learners globally to the basics of artificial intelligence.
The leading goal behind the project was to demystify the technology, show that it is not as scary as many imagine, and explain it through examples that are relatable to people outside technical environments too. And to anyone who wants to learn. For free.
The course invites you on a journey of discovery, from understanding the rules driving the algorithms, through basic terminology, to real-life applications and implications. It is written in a witty way, contains DIY exercises, and not only systematizes the knowledge but also prompts critical reflection. The course comprises multiple modules, but the two main parts are Introduction to AI and Building AI. So, after learning the theory, you can jump right into practice.
Another one worth recommending is Ethics of AI – tackling a theme that, in the context of rapid technological development, is a bottomless pit.
The discussion about the ethical use of AI takes new turns with every update, and there is still no consensus on many issues. But the course, prepared by the University in partnership with the Helsinki Centre for Data Science, the Finnish Ministry of Finance, the Finnish Center for AI, and the Cities of Helsinki, Amsterdam, and London, and available on the MOOC platform, constitutes a very interesting framework to start with.
Our ExpertHub crew has taken part in both courses, not only to learn about AI but also to learn how to talk about it skilfully and effectively, so that people listen to and benefit from what you pass on to them.
We would like to share with you some interesting learnings from the more reflective areas, drawn mainly from the latter course.
AI Ethics is a whole separate field of study: a subfield of the Ethics of Technology, which is itself a subfield of Applied Ethics. Applied Ethics is the discipline that determines what a moral agent is obligated or permitted to do in a particular domain. But who is a moral agent?
A moral agent, in simple words, is an agent with the capacity to make an informed, autonomous decision. In the past, the field of Roboethics treated robots as moral agents. AI Ethics departs from that assumption, holding the enablers of algorithms accountable rather than the algorithms themselves. This approach opens up a discussion of automated and autonomous algorithmic decisions. Do they take place? When do they take place? To what extent are AI systems autonomous?
Humans and machines become cognitive hybrids. That is why questions of responsibility, accountability, and agency come to the fore. Technology often reflects its developers. But, by definition, an AI actor is any actor involved in at least one stage of the AI lifecycle. All actors are morally responsible to some degree – not only those who construct the AI systems but also their end users.
In general, the basic terms in ethics are values and norms. Norms can be prescriptive (encouraging) or proscriptive (discouraging). This framework allows you to understand the difference between actively ‘doing good’ and ‘avoiding doing bad’, a somewhat passive approach. When discussing AI Ethics, most conversations touch upon the principle of non-maleficence (how to use AI so it does not harm anyone) rather than the principle of beneficence (how to use AI to actively do morally good things).
“Tech solutionism” (a term developed by Morozov) refers to the conviction that problems caused by technology can always be fixed by more technology. This reasoning is flawed. Human professionals, with their familiarity with context and their ability to reason, are needed in the debate about the ethical problems that using AI entails.
Among the other main concerns of AI Ethics is how to implement this technology broadly and in an accessible manner, so that it serves the common good of humankind. Defining the “common good”, though, requires a diversity of viewpoints and the participation of relevant representatives of that ‘humankind’. This brings us to the next point.
Humanity is falsely depicted as uniform. We are surrounded by narratives of the human aspect of AI, the human in the loop, and so on. But we need to consider that humans are not a uniform mass; there are also power relations between us. So, it would be better to say that AI exists between humans rather than next to humanity.
Principles discussed in ethics, such as transparency or privacy, are often morally agnostic. Transparency and privacy can each both enhance and endanger safety; their level is therefore important to the ethical discussion, but they are not ethical concepts as such. They are rather a pair of ideals. Similarly with inclusion and discrimination – AI can either produce or impair each of them.
Human rights are fundamental to ethical discussions, including in the case of AI applications. Just as talk about AI revolves around agency and autonomy, so do human rights. When the autonomy and agency of humans are impaired because of AI, its use becomes morally questionable.
The Ethics of AI course covers many more areas that are important prompts, not for AI but for human thinking: biases and the ways they enter a system, defining the robustness of AI systems, criticizing ‘toothless’ regulations…
Perhaps it is good, however, to end on the note that, as the course says, we have a moral obligation to construct products and services in a way that protects human rights even in unconventional circumstances.