Defining fairness: How IBM is tackling AI governance


Enterprises are hesitant to adopt AI solutions due to the difficulty in balancing the cost of governance with the behaviours of large language models (LLMs), such as hallucinations, data privacy violations, and the potential for the models to output harmful content.
One of the most difficult challenges facing the adoption of LLMs is specifying to the model what a harmful answer is, but IBM believes it can help improve the situation for firms everywhere.
Speaking at an event in Zurich, Elizabeth Daly, STSM, Research Manager, Interactive AI Group of IBM Research Europe, highlighted that the company is looking to develop AI that developers can trust, noting, “It’s easy to measure and quantify clicks, it’s not so easy to measure and quantify what is harmful content.”
Detect, Control, Audit
Generic governance policies are not enough to control LLMs, so IBM is looking to develop LLMs that use the law, corporate standards and the internal governance of each individual enterprise as a control mechanism – allowing governance to go beyond corporate standards and incorporate the individual ethics and social norms of the country, region or industry in which the model is used.
These documents can provide context to an LLM, and can be used to ‘reward’ an LLM for remaining relevant to its current task. This allows an innovative level of fine-tuning in determining when AI is outputting harmful content that may violate the social norms of a region, and can even allow an AI to detect whether its own outputs could be identified as harmful.
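IBM has not published the implementation behind this, but the idea of using a governance document as context against which the model reviews its own output can be sketched as a simple prompt-construction step. Everything below – the function name, the policy wording, the reviewer framing – is a hypothetical illustration, not IBM's actual approach:

```python
# Illustrative sketch only: inject an enterprise governance document into the
# prompt so a model can judge a draft answer against local norms and policies.
# The function name and policy text are hypothetical, not IBM's API.

def build_review_prompt(policy_doc: str, draft_answer: str) -> str:
    """Ask a model to check a draft answer against a governance policy."""
    return (
        "You are a compliance reviewer.\n\n"
        f"Governance policy:\n{policy_doc}\n\n"
        f"Draft answer:\n{draft_answer}\n\n"
        "Does the draft answer violate the policy? Reply COMPLIANT or HARMFUL."
    )

policy = "Answers must not disclose personal data or give medical advice."
draft = "Based on your symptoms, you should take 400mg of ibuprofen."
print(build_review_prompt(policy, draft))
```

In this sketch the enterprise's own document – rather than a generic safety rubric – defines what counts as harmful, which is the shift the article describes.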
Moreover, IBM has been meticulous in developing its LLMs on data that is trustworthy, and detects, controls and audits for potential biases at each stage of the pipeline. This is in stark contrast to off-the-shelf foundation models, which are typically trained on biased data; even if that data is later removed, the biases can still resurface.
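The article does not detail IBM's pipeline, but "detection at each stage" can be illustrated as audit checks wired between stages, so that flagged records never flow downstream. The stage names and the naive keyword check below are placeholders for illustration only, not IBM's tooling:

```python
# Hypothetical sketch of per-stage bias auditing in a data pipeline.
# BLOCKLIST stands in for a real bias/toxicity detector, which would be
# a trained classifier rather than a keyword list.

BLOCKLIST = {"slur_a", "slur_b"}

def bias_check(records: list[str], stage: str) -> list[str]:
    """Audit records at a named stage, dropping and reporting flagged ones."""
    flagged = [r for r in records if any(t in r.lower() for t in BLOCKLIST)]
    if flagged:
        print(f"[{stage}] flagged {len(flagged)} record(s)")
    return [r for r in records if r not in flagged]

def pipeline(raw: list[str]) -> list[str]:
    cleaned = bias_check(raw, "ingestion")                 # incoming data
    curated = bias_check([r.strip() for r in cleaned], "curation")
    return bias_check(curated, "pre-training")             # final audit
```

Auditing at every stage, rather than once at the end, is what prevents the resurfacing problem the article mentions: a record removed early can never silently re-enter a later stage.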
The proposed EU AI Act will link the governance of AI with the intentions of its users, and IBM states that usage is a fundamental part of how it will govern its model, as some users may use its AI for summarization tasks, and others may use it for classification tasks. Daly states that usage is therefore a “first class citizen” in IBM’s model of governance.