CONTRIBUTED BY
Karolina, ExpertHub Team
DATE
Apr 16, 2025
Here we are, hopping on the GenAI (Generative Artificial Intelligence) trend sweeping social media these days. But the image of the AI-imagined ICT Expert made us reflect on the pitfalls of this technology and the avenues we can take to mitigate them.
GenAI: Fun or none?
GenAI produces content that can appear human-made by analyzing and learning from large amounts of existing data, including text and images found online. Generating content with AI tools, like the popular “dolls”, has consequences, though: for social and environmental sustainability, privacy, security, law, and even culture.
Data centers built to train and run AI models have come to affect the environment to a significant degree. Copyright concerns are constantly raised, too. On top of that, GenAI is powered by biased data. That is why our Expert Doll is a young, able-bodied, white male, even though our prompt asked simply for a ‘human’. The representation problem persists in the field, leading us to question fairness in AI governance.
But we can still use GenAI ethically and wisely, by reflecting on, and then adjusting, how we use it, build it, and finance it. And by trusting the real-life (non-doll) Experts.
The need for expertise in operationalizing AI ethically and safely…
…is apparent in the multitude of new job positions. Larger enterprises implementing this technology not only have separate AI teams, but within those, there are groups specifically focused on AI Ethics or Responsible AI (RAI). AI Ethics is a growing, multidisciplinary area, and job titles can vary a lot depending on the company and the focus of the team (such as research, policy, product, fairness, or governance).
1️⃣ AI Ethicist / AI Ethics Specialist: Focuses on ethical implications of AI use, algorithmic fairness, bias mitigation, and social impacts. Might conduct risk assessments, draft ethical guidelines, or advise on responsible AI design.
2️⃣ Responsible AI / Ethical AI Researcher: Focuses on academic-style research on fairness, explainability, privacy, and bias in AI systems. Typically combines a background in AI/ML with philosophy, sociology, or law.
3️⃣ AI Policy & Governance Specialist: Focuses on developing AI governance frameworks, compliance with laws (like the EU AI Act), and internal policies. Works on aligning AI systems with ethical and legal standards.
4️⃣ AI Fairness / Bias Analyst: Focuses on technical and non-technical audits of models for bias, fairness, and explainability. Often involved in creating bias detection pipelines or fairness dashboards.
5️⃣ AI Risk Manager / AI Risk & Compliance Officer: Focuses on identifying, monitoring, and managing risks associated with AI systems. Bridges legal, ethical, and technical considerations in AI deployment.
7️⃣ Human-Centered AI / UX Researcher: Focuses on studying human interaction with AI systems to ensure they’re ethical, inclusive, and user-friendly. May run studies on how different groups are affected by AI outcomes.
7️⃣ AI Product Ethicist / Responsible AI Product Manager: Focuses on ensuring AI products align with ethical guidelines through the product development lifecycle. Works between ethics, engineering, and product teams.
8️⃣ AI Ethics Program Manager: Focuses on overseeing AI ethics initiatives, workshops, training programs, and compliance tracking.
9️⃣ AI Explainability Engineer: Focuses on designing and implementing systems to make AI decisions transparent and understandable to stakeholders.
The list does not end here, but extends to positions like AI Ethics Legal Counsel, Ethical Data Scientist, Algorithmic Accountability Auditor, Responsible AI Evangelist, or even Trust & Safety Analyst.
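Several of these roles, notably the AI Fairness / Bias Analyst, involve hands-on model audits. As a minimal sketch of one common fairness check, demographic parity difference (the gap in positive-prediction rates between demographic groups), the snippet below shows the kind of calculation such a bias detection pipeline might automate; the function name and data here are illustrative, not taken from any specific library:

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    counts = {}  # group -> (total seen, positive predictions)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Example: a model that approves 3 of 4 applicants from group "a"
# but only 1 of 4 from group "b" -> a 0.5 disparity.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A real audit would go further (multiple metrics, confidence intervals, intersectional groups), but even a simple rate comparison like this, run automatically on each model release, is the seed of the fairness dashboards mentioned above.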
Considering the many concerns you may face when building your AI project team, feel free to reach out to us.