Richard Hamming, the famous mathematician and computer scientist, was known for asking a pointed question: “What are the most important problems in your field, and why aren’t you working on them?” At the Distributed IoT-Based Platforms, Privacy, and Edge-Intelligence Research (DIPPER) Lab, we ask ourselves the same thing. For us, the most important problems are the risks and dangers that have come with the rapid rise of artificial intelligence (AI), and those are exactly what we are working on.
AI has grown incredibly fast and is being used everywhere. Consider what it can already do: help doctors diagnose illnesses by analyzing scans, speed up drug discovery, and predict protein structures that would have taken scientists years to determine. AI can even propose new materials that don’t yet exist, for use across different industries. It’s remarkable how AI can solve such complicated problems.
Even though AI can help us achieve great things and solve some of the biggest problems facing humanity, it also has the potential to cause harm, facilitate terrible and illegal acts, or even become a problem itself. The rise of AI is just the beginning; these systems will only get better and more powerful. What we’re seeing now is the least advanced these systems will ever be. With the ongoing race to create Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), we may one day face AI that surpasses human intelligence. Top scientists, including the “Godfather of AI,” Geoffrey Hinton, have raised alarms about this future. But while those issues may lie in the future, we are already grappling with many challenges caused by the rise of AI today.
AI, even in its simplest form, can be used to make predictions and guide decision-making based on data. For example, banks increasingly use AI to decide whether to approve loans by analyzing an applicant’s financial history. This might seem straightforward, but it shows how much influence even basic AI systems have over critical decisions: if the data used is biased or incomplete, the model can produce unfair outcomes, such as denying opportunities to deserving individuals. Ghanaian institutions are likely to adopt such AI-driven decision-making systems in the future because they are cost-efficient and fast. At DIPPER Lab, we are building tools that detect bias in AI models, helping ensure they produce fair and equitable outcomes. We are also developing explainable AI systems, which make it clear how decisions are reached, so that institutions and governments can use these models responsibly and transparently to benefit all citizens.
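To make this concrete, here is a minimal sketch of one common bias check, the demographic parity gap: the difference in approval rates between groups of applicants. The data and group labels below are hypothetical, and real auditing tools combine many complementary metrics.

```python
# Minimal sketch of a demographic parity check on loan decisions.
# All data here is hypothetical, for illustration only.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions (1 = approved) and applicant group labels.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap, rates = demographic_parity_gap(preds, groups)
print(f"Approval rates by group: {rates}; gap: {gap:.2f}")
# A large gap flags the model for closer review; it is evidence of
# possible bias, not proof of it.
```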
We have also seen rapid progress in AI systems that generate images and videos, producing results that are increasingly realistic. You can now give an AI system a picture or video and ask it to modify, enhance, or completely transform it, and it does so with remarkable accuracy. AI can replace one person in a video with another so seamlessly that an untrained eye may need considerable effort to notice. As impressive as this sounds, it is also dangerous. People are already using AI to generate pornographic images and videos of others without their consent, and video evidence can be tampered with, making it harder to trust what we see. These developments introduce new ways of distorting reality and framing people for acts they never committed.
At DIPPER Lab, we are working on tools to accurately detect AI-generated images and videos, as well as content that has been tampered with using AI. We believe this process shouldn’t be left to intuition or experts alone; everyone should have access to simple, reliable tools to verify authenticity. Furthermore, we are exploring ways to digitally secure images and videos using blockchain technology, ensuring that your content cannot be manipulated or misused in this manner.
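To illustrate the underlying idea, here is a minimal sketch of hash-based content integrity, the building block that blockchain anchoring relies on. The registry and function names are our own placeholders; a production system would record the digest in a blockchain transaction rather than a dictionary.

```python
# Sketch of hash-based integrity verification for media files.
# The "registry" dict stands in for an on-chain record.
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

registry = {}  # hypothetical stand-in for a blockchain ledger

def register(path: str) -> None:
    """Record the file's fingerprint at publication time."""
    registry[path] = fingerprint(path)

def verify(path: str) -> bool:
    """Any later tampering changes the digest, so this check fails."""
    return registry.get(path) == fingerprint(path)
```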
Sound generation has not been left out of AI’s rapid progress, and its potential for misuse is massive. With videos and pictures, people can at least be skeptical or look for clues to determine authenticity, but with sound, the progress is truly alarming. AI can now clone voices with incredible accuracy, making it possible to have someone’s voice appear to say anything. This technology is already being used for scams on a large scale. Imagine someone gaining access to a child’s phone number and calling with what sounds like their mother’s voice, telling them to leave the door unlocked because she is coming home, or asking them to send money. Worse, they could lure the child outside, potentially leading to kidnapping or other terrible outcomes. At DIPPER Lab, we are also working on tools to detect AI-generated voices and voice mimicry. We believe this is critically important because the line between real and fake has become so blurred: AI systems can replicate human voices so well that you might not even realize the voice isn’t real. Ensuring that everyone has access to tools to verify the authenticity of a voice is a priority, because this technology has the potential to cause real harm if left unchecked. We are committed to finding ways to counter this threat and protect people from its dangers.
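As a rough illustration of how such detectors are built, the toy sketch below follows the standard recipe: extract spectral features from audio clips labeled real or AI-generated, then train a binary classifier. The file paths are hypothetical, and a real detector would use far richer features and models than this.

```python
# Toy sketch of a synthetic-speech detector: MFCC features plus a
# linear classifier. File paths are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def features(path: str) -> np.ndarray:
    """Mean MFCCs: a compact spectral summary of one audio clip."""
    audio, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)

real_clips = ["real_0.wav", "real_1.wav"]      # hypothetical paths
fake_clips = ["cloned_0.wav", "cloned_1.wav"]  # hypothetical paths

X = np.array([features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([features("suspect.wav")])[0])  # 1 = likely synthetic
```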
Large Language Models (LLMs) bring with them a unique and terrifying risk: access to vast amounts of knowledge. These AI systems are trained on enormous datasets and have an incredible ability to generate information on almost any topic. Fortunately, popular LLMs like ChatGPT ship with safeguards designed to prevent the generation of harmful content; they are tuned to assist with tasks like education, coding, and research while refusing misuse. However, the growing availability of open-source tools and techniques means that individuals or organizations can now build their own LLMs without these safeguards. This poses a significant risk, as such unregulated models could be weaponized to produce dangerous instructions, generate harmful propaganda, or even orchestrate sophisticated cyberattacks.
AI systems are gradually being adopted in critical areas, from pacemakers and healthcare decision-making to fully autonomous driving on roads. They are also being deployed in agriculture, for tasks like detecting crop diseases, and even in home security systems. While these applications hold tremendous promise, they also raise an essential question: Are these AI systems robust, and more importantly, are they adversarially robust?
At DIPPER Lab, we are deeply invested in exploring these safety concerns. We are currently conducting rigorous tests on state-of-the-art AI systems, including reasoning models that outperform humans on certain tasks. Our goal is to ensure that these systems, when deployed in real-world scenarios, can withstand challenges and manipulations. For example, we care about whether the AI in your home security system can be deceived or manipulated, or whether IoT systems in agriculture can be exploited to disrupt farming activities. Adversarial robustness, the ability of AI systems to function reliably even when under attack or faced with unexpected inputs, is critical to ensuring their safety and effectiveness.
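To show what such a probe looks like in practice, here is a minimal sketch of one standard robustness test, the Fast Gradient Sign Method (FGSM): nudge an input in the direction that most increases the model’s loss and check whether the prediction flips. The tiny model, random input, and epsilon below are illustrative placeholders, not a system we have deployed.

```python
# Minimal FGSM robustness probe in PyTorch. The model and input
# are placeholders for illustration only.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return a copy of x perturbed to increase the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step each input value by epsilon in the sign of the loss gradient.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, label = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm_attack(model, x, label)

# A robust model keeps its prediction under perturbation; a brittle
# one flips it.
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```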
Imagine a self-driving car that isn’t robust: a small manipulation or error could lead to catastrophic consequences. Now imagine the opposite—a world where self-driving cars are so robust that car accidents become a thing of the past. Before achieving such a future, we need to ensure that AI systems are capable and reliable, even when faced with malicious interference or unpredictable scenarios.
This is not to paint AI as something to fear or stop; far from it. Nobody wants AI to succeed more than we do, because we believe in its incredible potential to revolutionize Ghana and the world.
Imagine AI transforming education in Ghana, providing personalized learning experiences for students in even the most remote areas, or revolutionizing healthcare by diagnosing diseases early and helping to find cures for conditions that have long plagued us. AI could strengthen our policing and security systems, helping prevent crime and keep our communities safe while supporting fair and transparent law enforcement practices. In transportation, AI could drastically reduce accidents through autonomous vehicles that are smarter and safer than any human driver.
AI could also solve complex scientific mysteries, unlocking answers to problems we’ve struggled with for decades. And as we face challenges from malicious uses of AI, we will need good AI to combat bad AI, from detecting deepfakes to stopping cyberattacks and misinformation campaigns. It’s not just about solving today’s problems—it’s about creating a future where AI works alongside us to improve every aspect of our lives.
At DIPPER Lab, this is the future we are working toward. We are committed to advancing AI responsibly, ensuring it is safe, reliable, and fair, so that its transformative potential benefits everyone. The progress AI promises is not just worth pursuing—it is essential for building a better Ghana and a better world.