The word "intelligence" used to have a clear meaning. With the rise of Artificial Intelligence (AI), that clarity has unravelled.
Noam Chomsky, in a much-acclaimed article in The New York Times and in numerous podcasts and speaking engagements, has dismissed AI systems as "glorified plagiarism machines…"
The three modern fathers of AI, Yann LeCun, Geoffrey Hinton, and Yoshua Bengio, cannot seem to agree among themselves on what the word "intelligence" truly means.
Hinton and Bengio envision a future where AI systems, powered by extensive datasets, attain human-level intelligence by emulating the cognitive processes of the human brain. Their fears about the ramifications of such advancements add an intriguing dimension to the discourse.
Yann LeCun, the chief AI scientist for Facebook AI Research, offers a thought-provoking counterpoint. He argues that current AI systems, despite their vast stores of knowledge, lack crucial skills like reasoning and planning. LeCun advocates a shift towards objective-driven architectures: AI systems that would prioritize understanding their environment, reasoning through problems, and acting with specific goals in mind. This approach, according to LeCun, is key to ensuring AI remains under human control and aligned with ethical principles. In his view, today's AI has not even reached the intelligence of a dog.
The ultimate goal of AI research for some is superintelligence. Is this a realistic possibility, given what we know about intelligence?
Shall we embark on a debate grounded in knowledge?
How do AI systems learn?
Do you recall skipping that class on mathematical functions? Or perhaps Statistics I and II, staples of Kenya's 8-4-4 education system that instilled in us the foundational knowledge to pursue any path in life? It is those fundamental principles that underpin AI systems.
There are those who wonder: when did mathematical functions become so central to our existence? Yet, here we stand, at a juncture where numbers and algorithms are shaping, or are on the verge of shaping, our daily interactions.
In many AI and computer science classes this statement often defines the world of technology:
Functions define the world
Let's put it simply: imagine showing a child thousands of pictures of cats and dogs. The child learns to recognize the differences between the two animals.
An AI system is like that child, but instead of pictures, it sees massive amounts of data.
It then learns the important features of that data, like recognizing patterns in cat pictures or understanding the flow of a conversation.
Finally, the AI uses this knowledge to perform tasks, like identifying cats in new photos or answering your questions in a conversation.
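The analogy above can be sketched in a few lines of code. This is a toy illustration, not any production system: the "pictures" are reduced to two made-up numeric features (ear pointiness and snout length, purely hypothetical values), and the "learning" is a simple nearest-neighbour rule that memorizes labelled examples and labels a new example by its closest match.

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, features):
    """Label a new example with the label of its closest training example."""
    _, label = min(training_data, key=lambda item: distance(item[0], features))
    return label

# Hypothetical "pictures": (ear_pointiness, snout_length) -> label.
# The numbers are invented for illustration only.
training_data = [
    ((0.9, 0.2), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.3, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

# A new, unseen animal is classified by pattern similarity,
# just as the child generalizes from the pictures it has seen.
print(predict(training_data, (0.85, 0.25)))  # closest to the cat examples
print(predict(training_data, (0.25, 0.85)))  # closest to the dog examples
```

Real AI systems replace this hand-picked pair of features and the nearest-neighbour rule with functions of millions of learned parameters, but the underlying idea is the same: a mathematical function maps input data to an output.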
However, this perspective is somewhat simplistic and overlooks the intricacies of generative AI, encompassing areas like image generation and natural language processing. Additionally, there's a wealth of groundbreaking research that has remained largely inaccessible to the general public over time.
Let's listen to a bit of this.
In partnership with Devin.ai, we've crafted codekijiji.ai, a local language Large Language Model (LLM). Our ongoing efforts aim to streamline translations and text-to-speech (TTS) capabilities for Kenyan languages, with an eye towards broader implementation across the African continent.
It might sound unconventional—using a tool to create another tool—but in the realm of technological advancement, such endeavors are not as outlandish as they may initially seem.
This leads us to the fundamental question: what defines true intelligence? While AI tools learn from other tools, humans also learn from their surroundings, including their environment and social interactions—a process often referred to as social learning. But is intelligence simply the ability to learn, or does it encompass an innate capacity for certain tasks?
In this regard, I find myself inclined to agree with Yann's perspective: advocating for a shift away from current AI models towards objective-driven architectures. He emphasizes the importance of understanding, reasoning, and goal-oriented behavior in future AI systems to ensure they remain within human control and adhere to ethical principles.
To me, intelligence means the capability to acquire and synthesize information into practical solutions for everyday challenges. However, while AI systems have made significant strides, many still rely on copying and regurgitating familiar patterns rather than truly grasping novel concepts, which in essence makes them not as intelligent as we have been led to think.
So, are we at a stage where AI will embody true intelligence? It is a complex question. Given the vast amounts of data being processed and the advancements in AI technology, it is conceivable that the traditional definition of intelligence may become less relevant. Instead, the focus may shift towards understanding the purpose behind building these systems.
Indeed, as technology continues to progress, there is a pressing need for global solutions to a multitude of challenges, from climate-related disasters such as unexplained floods around the world to economic meltdowns that defy easy explanation. Modeling such phenomena enables us to prepare for the future and develop strategies to mitigate their impact.
In this context, the evolution of AI is not just about achieving a certain level of intelligence, but also about leveraging technology to tackle real-world problems and create a more sustainable and resilient future.
Codekijiji.ai
The world of AI is becoming increasingly multilingual, and a fascinating project is underway – the development of a Large Language Model (LLM) specifically for Kenyan languages! This initiative, spearheaded by codekijiji.ai in collaboration with Devin.ai, aims to revolutionize communication and information access for Kenyans.
Why is a K-LLM Important?
Imagine a world where translation between Kenyan languages is seamless and text-to-speech technology accurately reflects the beauty and nuances of your native tongue. That's the potential of the K-LLM! Think of effortlessly understanding news articles, educational materials, and entertainment in your preferred language.
Currently, the project is focusing on Kikuyu, a widely spoken language in Kenya. This initial step paves the way for incorporating other Kenyan languages in the future. By ensuring accurate pronunciations like "Wĩ Mwega" and "Thiĩ Mwega Mũno" (instead of unintelligible outputs), the K-LLM aims to preserve the essence of these languages.
How Can You Help?
Developing a robust LLM requires a significant amount of data and ongoing refinement. Here are some ways you can contribute:
Spread the word! Raise awareness about the K-LLM project and its importance for Kenyan languages.
Contribute data (if possible): If you're a native speaker, consider providing samples of written and spoken Kikuyu (or other Kenyan languages as the project expands) to enrich the LLM's training data.
Engage with the project: Stay informed about the K-LLM's progress and find ways to participate in its development (if opportunities arise).
I was actually thinking about this this morning... I want to check whether we have digitized versions of Kikuyu texts, but also visit the different archives where they can be found.
Interesting project! Is there a corpus of Kikuyu speech or are you hoping to crowdsource it all from scratch?