In an era where technology vendors market AI solutions that fail to deliver and ethicists get lost in theoretical debates, a critical question arises: are these efforts solving real-world problems, or merely indulging in intellectual exercises? The true value and responsibility of technology, and of artificial intelligence in particular, will not be realised in labs or courtrooms. Yet many leading AI experts have been preoccupied with creating impressive demos that lack practical application, or with developing consumer internet applications that exploit personal data for profit, while criticising the use of similar technologies in meaningful projects that could enhance societal safety and security.
Unfortunately, the current landscape paints a concerning picture. On one hand, technology vendors often oversell their capabilities, marketing fantastical AI solutions that fail to deliver on their promises. This "AI snake oil" leads to disillusionment and hinders the progress of the field as a whole.
On the other hand, many AI ethicists become consumed by theoretical debates, focusing on abstract principles detached from the realities of implementation. While ethical considerations are crucial for responsible development, discussions often fail to offer practical guidance for building and using AI in real-world settings.
This disconnect between theoretical discussions and practical application represents a significant roadblock to realising the true value of AI. To ensure responsible and effective development, we need to shift our focus to building AI solutions that tackle concrete challenges and integrate ethical considerations from the ground up.
The term "artificial intelligence" (AI) has become a buzzword, encompassing a vast and often ambiguous range of technologies. What was once described as "big data" or "predictive analytics" can now be readily rebranded as AI, blurring the lines between distinct concepts. Additionally, confusion arises from the tendency to conflate AI with automation.
This lack of clear definition creates a fertile ground for misleading claims. The promise of transformative AI advancements often falls short of reality, leading to disillusionment and concerns about ethical implications like algorithmic bias, accountability, and transparency. As a result, the field of AI finds itself in a critical state, struggling to live up to the inflated expectations.
Moving forward, a more critical and nuanced understanding of AI is essential. We must deconstruct the "AI" label and recognise the diverse range of technologies it encompasses. Only then can we move beyond the hype and focus on ethically and responsibly developing and deploying these technologies to address real-world challenges.
The discourse surrounding AI ethics has become an unfortunate echo chamber of lofty principles and abstract discussions, often offering little practical guidance for real-world application. This "ethics as theory" approach, prevalent in many AI ethics statements, resembles a check-the-box exercise, failing to address the complex ethical challenges faced by users and operators.
The sheer volume of these principles has fuelled an industry dedicated to analysing them, further highlighting their inadequacy in addressing practical issues. This has sparked concerns about a legitimacy crisis surrounding AI ethics and raises critical questions:
-> What makes AI unique enough to warrant distinct ethical treatment compared to other technologies?
-> Are there more fundamental concerns to address before formalising AI ethics frameworks?
-> How can abstract principles translate into meaningful action and avoid being mere theoretical musings?
While frameworks addressing algorithmic bias, accountability, and explainability are crucial, they represent a narrower focus. We argue against such tunnel vision, urging consideration of the entire system, not just isolated algorithms.
Our approach to technology ethics recognises that software platforms exist within a broader context, inextricably tied to their application, operational use, and surrounding data environment. This holistic perspective goes beyond the specific AI component and acknowledges the ethical implications of the entire system.
The reality of AI effectiveness often falls short of the hyped promises. Many success stories, upon closer examination, reveal exaggeration or fabrication. For instance, AI designed for monetising internet engagement has prioritised profits over societal consequences, fostering social media echo chambers and contributing to political division. While not all AI applications are harmful, their second- and third-order effects must be considered.
If we instead treat artificial intelligence as a set of tools for human use, we become better equipped to situate AI in appropriate framing contexts that recognise its critical features, constraints, liabilities, and dependencies:
We view AI models not as independent entities capable of miraculous solutions, but as tools within a larger system. Their capabilities are dependent on the supporting infrastructure, such as data quality, computational resources, and surrounding workflows. Additionally, AI models are vulnerable to errors if not carefully managed, maintained, and monitored.
Our notion of operational AI moves beyond the realm of theoretical ideas and academic exercises. We prioritise the integration of AI models into real-world scenarios, taking the full context of their deployment into account (a brief sketch follows the list below). This includes factors such as:
-> Model inputs: Examining the quality and potential biases within the data used to train and operate the model.
-> Users: Understanding the needs, skills, and limitations of those interacting with the AI system.
-> Model outputs: Analysing the potential ramifications and unintended consequences of the AI's outputs.
-> Consequences: Evaluating the real-world impact of the AI system on individuals, groups, and society as a whole.
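One lightweight way to honour these factors is to record them explicitly alongside the model, so that deployment context is reviewed rather than assumed. The sketch below is a minimal illustration in Python; the DeploymentContext class and its fields are ours, invented for this example, not a description of any particular product.

```python
from dataclasses import dataclass, field


@dataclass
class DeploymentContext:
    """Illustrative record of the operational context a model is deployed into."""
    input_sources: list[str]           # where training/operational data comes from
    known_input_biases: list[str]      # documented biases in those sources
    intended_users: str                # who interacts with the system, and how
    output_consumers: list[str]        # downstream systems or decisions fed by outputs
    anticipated_consequences: list[str] = field(default_factory=list)

    def review_checklist(self) -> list[str]:
        """Flag context fields left empty before sign-off."""
        missing = []
        if not self.known_input_biases:
            missing.append("No input biases documented - has the data been audited?")
        if not self.anticipated_consequences:
            missing.append("No consequences listed - has real-world impact been assessed?")
        return missing


# Hypothetical example: an empty bias list is surfaced, not silently accepted.
context = DeploymentContext(
    input_sources=["claims_db_2023"],
    known_input_biases=[],
    intended_users="case officers triaging claims",
    output_consumers=["manual review queue"],
)
print(context.review_checklist())
```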
We acknowledge the limitations of relying solely on fairness metrics and simplistic notions of data bias. Fairness is context-dependent: a metric that appears fair in one scenario may not translate effectively to another (illustrated in the sketch after this list). Our approach focuses on building models that are fair and unbiased within the specific context of their intended use. This involves:
-> Understanding the cultural, historical, and institutional background against which fairness needs to be evaluated.
-> Recognising that all data exhibits some form of bias, and asking which biases are acceptable or even necessary for the model to function correctly in its specific context.
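To make the context dependence concrete, here is a hedged sketch that computes one common fairness measure, the demographic parity difference, and evaluates the same gap against different context-specific thresholds. The data, group labels, and threshold values are all invented for illustration.

```python
def demographic_parity_difference(outcomes: list[int], groups: list[str],
                                  group_a: str, group_b: str) -> float:
    """Difference in positive-outcome rates between two groups (0 = parity)."""
    def rate(g: str) -> float:
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return rate(group_a) - rate(group_b)


# Toy data: group "a" receives positive outcomes at 0.75, group "b" at 0.50.
outcomes = [1, 1, 1, 1, 1, 0, 0, 0]
groups   = ["a", "a", "a", "b", "b", "b", "b", "a"]
gap = demographic_parity_difference(outcomes, groups, "a", "b")

# The same gap can be acceptable in one context and not another: a tool that
# merely queues cases for human review might tolerate more disparity than one
# that makes final decisions automatically. Thresholds here are invented.
THRESHOLDS = {"advisory_screening": 0.30, "automated_decision": 0.05}
for use_case, limit in THRESHOLDS.items():
    print(f"{use_case}: gap={gap:.2f}, acceptable={abs(gap) <= limit}")
```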
We advocate for a holistic approach to data and model management, providing tools for continuous testing, evaluation, and improvement (a minimal audit-trail sketch follows the list). This includes:
-> Tracking the full provenance and lineage of data used in the model throughout its lifecycle.
-> Structuring modelling efforts around a sensible relationship ontology that translates raw data elements into meaningful concepts based on the specific context.
-> Implementing version control for changes made to data, models, parameters, and other elements of the system.
-> Monitoring how dynamic environmental factors can affect usage and outcomes, ensuring ongoing model performance and reliability.
-> Conducting continuous testing and evaluation, including data quality checks and integrity assessments, to mitigate the inevitable impacts of entropy and brittleness in AI models over time.
-> Creating a persistent and reliable audit trail for all data processing steps to facilitate future analysis, troubleshooting, oversight, and accountability.
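As a minimal sketch of what such provenance and audit machinery could look like, the illustration below chains processing steps into an append-only, hash-linked log so that lineage can be traced and tampering with history detected. The AuditTrail class and its fields are assumptions made for this example, not an actual implementation.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditTrail:
    """Illustrative append-only log of data-processing steps with lineage."""

    def __init__(self):
        self.entries = []

    def record(self, step: str, inputs: list[str], outputs: list[str],
               params: dict) -> str:
        entry = {
            "step": step,
            "inputs": inputs,        # upstream artefact IDs (lineage)
            "outputs": outputs,      # artefact IDs this step produced
            "params": params,        # versioned parameters used
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        # Hash-chain the entries so any later edit to history is detectable.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["hash"]


trail = AuditTrail()
trail.record("ingest", inputs=["raw_claims.csv"], outputs=["claims_v1"],
             params={"schema": "2024-01"})
trail.record("train", inputs=["claims_v1"], outputs=["model_v1"],
             params={"algo": "gbm", "seed": 42})
```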
We emphasise the critical and ongoing need for model and system maintenance. Unlike "fire-and-forget" solutions, AI systems require consistent care and attention to maintain their effectiveness (see the monitoring sketch after this list). This includes:
-> Regularly reviewing and updating models to adapt to changing environments and evolving requirements.
-> Monitoring the performance of models and addressing any deterioration in accuracy or reliability.
-> Implementing robust error handling and feedback mechanisms to ensure that model outputs and potential failures are communicated transparently to users.
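For instance, performance monitoring can start as simply as comparing live accuracy against a baseline recorded at deployment, over a rolling window. The sketch below assumes that ground-truth labels eventually arrive as feedback, which is not always true in practice; the class and parameter names are illustrative.

```python
from collections import deque


class PerformanceMonitor:
    """Illustrative rolling check that a deployed model still meets its baseline."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)   # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        self.results.append(1 if prediction == actual else 0)

    def degraded(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False                      # not enough evidence yet
        live = sum(self.results) / len(self.results)
        return live < self.baseline - self.tolerance


monitor = PerformanceMonitor(baseline_accuracy=0.91)
# In operation: once ground truth arrives, feed outcomes back in.
monitor.record(prediction=1, actual=1)
if monitor.degraded():
    print("Accuracy below tolerance - trigger review and retraining.")
```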
We view user interaction with AI outputs as a central feature of the entire system's operation, not merely an afterthought. This translates into user-oriented interface considerations (sketched after this list) that:
-> Provide clear and contextual information about the model's outputs, including confidence measures and limitations.
-> Offer support mechanisms that augment and guide informed human decision-making based on AI insights.
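A minimal sketch of what such a user-oriented output might carry, assuming a calibrated confidence score is available; the field names and thresholds are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class ExplainedOutput:
    """Illustrative user-facing wrapper around a raw model prediction."""
    prediction: str
    confidence: float            # calibrated probability, not a raw score
    known_limitations: tuple     # caveats shown alongside the result
    recommended_action: str      # guidance that keeps the human in the loop


def present(raw_label: str, probability: float) -> ExplainedOutput:
    limitations = ("Trained on 2023 data; recent cases may differ",)
    if probability < 0.7:
        action = "Low confidence: route to manual review."
    else:
        action = "Use as a starting point; verify key fields before acting."
    return ExplainedOutput(raw_label, probability, limitations, action)


result = present("likely_duplicate", 0.64)
print(f"{result.prediction} ({result.confidence:.0%} confidence)")
print(result.recommended_action)
```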
We pursue human-oriented applications through our collaborative intelligence initiative. This means:
-> Prioritising the design of AI systems that complement human expertise and judgement rather than replacing them entirely.
-> Recognising the critical role of human oversight and accountability in the deployment and use of AI systems.
-> Actively engaging with stakeholders and seeking diverse perspectives to ensure that AI is deployed in a socially responsible and equitable manner.
We believe in fostering transparency and honesty regarding the trade-offs, limitations, and potential failures inherent in AI systems (a minimal model-card sketch follows the list). This includes:
-> Openly communicating the limitations of AI models and the potential for errors or biases in their outputs.
-> Providing clear explanations of how AI models are developed and how they reach their conclusions.
-> Acknowledging the ethical considerations surrounding AI deployment and engaging in open dialogue with stakeholders about the potential risks and benefits.
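One established vehicle for this kind of transparency is a model card published alongside the system, in the spirit of "Model Cards for Model Reporting" (Mitchell et al., 2019). The minimal sketch below uses entirely invented values.

```python
# A minimal model-card-style record. All values are illustrative.
model_card = {
    "model": "claims-triage-gbm",
    "version": "1.4.0",
    "intended_use": "Rank incoming claims for human review; not for final decisions.",
    "training_data": "Internal claims, 2019-2023; under-represents rural regions.",
    "known_limitations": [
        "Accuracy drops on claim types introduced after 2023.",
        "Confidence scores are calibrated only for English-language claims.",
    ],
    "ethical_considerations": "Outputs reviewed by trained staff; appeals process applies.",
    "contact": "ml-governance@example.com",
}

for key, value in model_card.items():
    print(f"{key}: {value}")
```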
By adhering to these principles, we believe our pragmatic approach to AI can lead to the development of reliable, durable, and effective AI tools. These tools are not simply academic musings or fleeting trends, but real-world solutions that address critical challenges across diverse sectors.
Our collaborative work with various clients exemplifies this impactful application of AI. We work closely with them to:
-> Understand the specific complexities of their domains and the unique challenges they face.
-> Grapple with the legal, policy, and ethical considerations surrounding their desired AI solutions.
-> Co-create AI systems that address those complexities on their own terms, ensuring they are contextually appropriate, ethically sound, and operational in real-world settings.
This approach transcends performative pronouncements about the potential of AI. It focuses on the practical aspects of building and deploying functional and impactful AI solutions that truly serve the needs of the world around them.
Turium's pragmatic and ethical approach to AI seeks to move beyond the hype and contribute to the responsible development and deployment of this powerful technology. By fostering collaboration, transparency, and responsible application, we can harness the potential of AI to create a better future for humanity.