How businesses can responsibly adopt generative AI


Generative AI is one of the most transformational technologies of modern times and has the potential to fundamentally change how we do business. From boosting productivity and innovation, to ushering in an era of augmented work where human skills are assisted by AI technology, the opportunities are limitless. But some risks accompany these opportunities. We’ve all heard stories about AI hallucinations presenting fictional data as facts, and warnings from experts about potential cybersecurity issues.
These stories highlight the ethical issues that companies must address to ensure this powerful technology is used responsibly and benefits society. The inner workings of AI systems can be difficult to fully understand, which makes building trusted and ethical AI more important than ever. To ensure responsible adoption of the technology, businesses need to embed both ethical and security considerations at every stage of the journey – from the point of identifying potential AI use-cases and their impact on the organization, to the actual development and adoption of AI.
By the UK Chief Technology & Innovation Officer at Capgemini UK.
Responding to AI risks with caution
Many organizations are adopting a cautious approach when it comes to AI adoption. Our recent research revealed that despite 96% of business leaders considering generative AI as a hot boardroom topic, a sizeable proportion of businesses (39%) were taking a “wait-and-watch” approach. This is not surprising, given that the technology is still in its infancy.
But leveraging AI also enables a strong competitive advantage, so first movers in this space have a lot to gain if they do it right. The responsible adoption of generative AI begins with understanding and tackling the associated risks. Issues like bias, fairness, and transparency need to be considered from the very beginning, when use cases are being explored. Once a thorough risk assessment is performed, organizations need to devise clear strategies for mitigating the identified risks.
Mitigation strategies include implementing safeguards, putting a governance framework in place to oversee AI operations, and addressing any issues related to intellectual property rights. Generative AI models can produce unexpected and unintended outputs, so continuous monitoring, evaluation, and feedback loops are key to catching hallucinations before they cause harm to individuals or organizations.
AI is only as good as the data that powers it
With Large Language Models (LLMs), there is always a risk that biased or inaccurate data compromises the quality of the output, creating ethical risks. To tackle this, businesses should establish robust validation mechanisms to cross-check AI outputs against reliable data sources. Implementing a layered approach, in which AI outputs are reviewed and verified by human experts, adds a further safeguard against the circulation of false or biased information.
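As a rough sketch of what such a validation mechanism could look like, the snippet below cross-checks a model's answer against a trusted source of record and routes anything unverifiable to human review. The `TRUSTED_FACTS` store, field names, and statuses are illustrative assumptions, not a real knowledge base or API.

```python
# A minimal output-validation sketch: answers that cannot be matched
# against a trusted source of record are escalated rather than released.
# TRUSTED_FACTS stands in for whatever system of record you actually use.

TRUSTED_FACTS = {
    "capital_of_france": "Paris",
    "water_boiling_point_c": "100",
}

def validate_output(field: str, model_answer: str) -> dict:
    """Cross-check a model answer against the trusted source of record."""
    expected = TRUSTED_FACTS.get(field)
    if expected is None:
        # No ground truth available: send to human review, don't trust blindly.
        return {"status": "needs_review", "answer": model_answer}
    if model_answer.strip().lower() == expected.lower():
        return {"status": "verified", "answer": model_answer}
    # A mismatch suggests a possible hallucination.
    return {"status": "flagged", "answer": model_answer, "expected": expected}
```

In practice the lookup would hit a curated knowledge base or retrieval system, but the control flow – verify, flag, or escalate – is the part that matters.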
Ensuring that private company data remains secure is another critical challenge. Establishing guardrails to prevent unauthorized access to sensitive data, or data leakage, is essential. Companies should employ encryption, access controls, and regular security audits to safeguard sensitive information. Guardrails and orchestration layers help ensure AI models operate within safe and ethical boundaries. Additionally, using synthetic data (artificially generated data that mimics real data) can help maintain data privacy while enabling AI model training.
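One simple form such a guardrail might take is scrubbing obviously sensitive patterns from a prompt before it leaves the organization. The patterns below are illustrative assumptions; a real deployment would combine this with encryption, access controls, and audit logging rather than rely on regexes alone.

```python
import re

# A simplified guardrail sketch: redact common sensitive patterns from a
# prompt before it is sent to an external model.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each sensitive match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```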
Transparency is key to understanding AI
Since the inception of generative AI, one of the biggest challenges to its safe adoption has been the lack of wider understanding that LLMs are pre-trained on vast amounts of data, and the potential for human bias as part of this training. Transparency over how these models make decisions is vital to building trust among users and stakeholders.
There needs to be clear communication about how LLMs work, the data they use, and the decisions they make. Businesses should document their AI processes and provide stakeholders with understandable explanations of AI operations and decisions. This transparency not only fosters trust but also allows for accountability and continuous improvement.
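A lightweight way to make that documentation concrete is to log every AI decision as a structured record at the inference call site. The sketch below assumes a hypothetical `log_decision` helper and a JSON-lines file; real systems would add access controls and retention policies around such an audit trail.

```python
import json
import time

# A sketch of an AI decision log: each record captures the model, input,
# and output so that decisions can later be explained and audited.

def log_decision(model: str, prompt: str, output: str, path: str) -> dict:
    """Append one auditable record per model decision to a JSON-lines file."""
    record = {
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```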
Additionally, establishing a trust layer around AI models is crucial. This layer involves continuous monitoring for potential anomalies in AI behaviors and ensuring that AI tools are tested in advance and used securely. By doing so, companies can maintain the integrity and reliability of AI outputs, building trust among users and stakeholders.
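As a toy illustration of what anomaly monitoring in such a trust layer might involve, the class below flags responses whose length deviates sharply from the recent baseline – a cheap proxy for behavioral drift. The metric, window size, and threshold are all illustrative assumptions.

```python
from statistics import mean, stdev

# A toy trust-layer monitor: flag responses whose length is a statistical
# outlier versus the recent baseline. Real systems would track richer
# signals (toxicity scores, refusal rates, retrieval grounding, etc.).

class ResponseMonitor:
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.window = window
        self.z_threshold = z_threshold
        self.lengths: list[int] = []

    def observe(self, response: str) -> bool:
        """Record a response; return True if it looks anomalous."""
        n = len(response)
        anomalous = False
        if len(self.lengths) >= 10:
            mu, sigma = mean(self.lengths), stdev(self.lengths)
            if sigma > 0 and abs(n - mu) / sigma > self.z_threshold:
                anomalous = True
        self.lengths.append(n)
        self.lengths = self.lengths[-self.window:]
        return anomalous
```

Flagged responses would then feed the human-review and feedback loops described above, rather than being served directly.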
Finally, developing industry-wide standards for AI use through collaboration among stakeholders can ensure responsible AI deployment. These standards should encompass ethical guidelines, best practices for model training and deployment, and protocols for handling AI-related issues. Such collaboration can lead to a more unified and effective approach to managing AI’s societal impact.
The future of responsible AI
The potential of AI cannot be overstated. It allows us to solve complex business problems, predict scenarios, and analyze huge volumes of information that can give us a better understanding of the world around us, speed up innovation, and aid scientific discovery. However, as with any emerging technology, we are still on the learning curve, and regulation has yet to catch up. Proper care and consideration, therefore, needs to be taken with its deployment.
Going forward, it is imperative that businesses have a clear strategy for the safe adoption of generative AI, which involves embedding guardrails at every stage of the process and continuous monitoring of the risks. Only then can organizations fully realize its benefits, while mitigating against its potential pitfalls.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro