eMusings
Your eyes and ears on the worlds of art, culture, technology, philosophy - whatever stimulates the mind and excites the imagination. We remind you that 20 years of back issues of eMusings can be found on our archives page.
You may remember a skill game called the marble maze. An open-source robot recently took only a few hours to master it. Called the CyberRunner robot, the machine consists of 2 motors, a camera, and an AI brain, and it even found ways to cheat. The AI brain can issue 55 control decisions per second. After about 6 hours of practice, the robot beat the best recorded human time by more than 6%. The cheating involved skipping entire sections of the maze so it could finish even faster, and its human instructors had to reteach it not to take shortcuts. Because the project is open source, its developers expect that almost anyone will be able to build and train one for roughly $200.
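For readers who like to peek under the hood, the project relies on learning by trial: try a setting, measure how far the ball gets, keep what works. Here is a toy Python sketch of that kind of loop; every name and number in it is invented for illustration, and it is not the CyberRunner team's actual algorithm.

    import random

    def simulate_run(tilt_x, tilt_y, steps=55 * 5):
        """Toy stand-in for the maze: the ball drifts according to the chosen
        tilts (plus noise), and 'progress' is how far it gets before a hole."""
        x = y = 0.0
        progress = 0.0
        for _ in range(steps):              # 55 decisions per second, 5 seconds
            x += 0.01 * tilt_x + random.gauss(0, 0.002)
            y += 0.01 * tilt_y + random.gauss(0, 0.002)
            progress = max(progress, x + y)
            if abs(x - y) > 0.5:            # toy "hole": the run ends early
                break
        return progress

    # Learning by random search: try perturbed tilt settings, keep the best one.
    best = (0.0, 0.0)
    best_score = simulate_run(*best)
    for episode in range(200):
        candidate = (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))
        score = simulate_run(*candidate)
        if score > best_score:
            best, best_score = candidate, score

    print("best tilt policy:", best, "progress:", round(best_score, 2))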
A new AI system named Brainoware uses 3D brain organoids, called "mini brains," to learn and operate. The organoids are grown from human stem cells, which develop into neural networks. For 2 days the Brainoware received electrical stimulation encoding the speech characteristics of 8 people; by day 3 it could recognize the differences between speakers. The project's developers hope to gain insight into how the human brain develops, learns, and adapts to change. With its roughly 200 billion cells networked through trillions of interconnections, the human brain is considered the most powerful computing hardware yet known. Its neurons are connected via structures called synapses. In standard computers, data processing and data storage are kept separate, forcing the machine to shuttle information constantly between the two. In the human brain, however, the 2 functions occur in the same physical place. The brain needs just 20 watts to work, whereas comparable AI hardware requires about 8 million watts. The brain can also learn from just a few examples, while the AI needs huge databases. New neuromorphic chips can now combine learning and storage in the same location, but they are not easy to manufacture and still only partially replicate the complexity of the human brain.
A car powered by AI has been tested for its anti-skid capabilities. The driverless auto drove around for an hour and a half while its traction control was put through its paces. The AI system reacted when wheels spun too quickly and made the adjustments needed to bring the car back under control. An ice rink served as the test bed, letting the engineers anticipate how the system would behave in wet or icy conditions.
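A simplified sketch of the underlying idea, detecting wheel slip and easing off the throttle, might look like the following; the thresholds and gains are invented for illustration and are not taken from the tested car.

    def wheel_slip(wheel_speed_mps, vehicle_speed_mps):
        """Slip ratio: how much faster a wheel spins than the car is moving."""
        if vehicle_speed_mps < 0.1:
            return 0.0
        return (wheel_speed_mps - vehicle_speed_mps) / vehicle_speed_mps

    def traction_adjust(throttle, wheel_speed_mps, vehicle_speed_mps,
                        slip_target=0.1, gain=2.0):
        """Reduce throttle in proportion to how far slip exceeds the target."""
        slip = wheel_slip(wheel_speed_mps, vehicle_speed_mps)
        excess = max(0.0, slip - slip_target)
        return max(0.0, throttle - gain * excess * throttle), slip

    # Example: the car moves at 10 m/s but a drive wheel spins at 14 m/s on ice.
    new_throttle, slip = traction_adjust(throttle=0.8, wheel_speed_mps=14.0,
                                         vehicle_speed_mps=10.0)
    print(f"slip={slip:.2f}, throttle cut from 0.80 to {new_throttle:.2f}")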
In an attempt to generate new materials, researchers have opened an autonomous laboratory, the A-Lab, to study and synthesize inorganic powders. The scientists combined machine learning (ML), historical data, and active learning with robots. During one 17-day test period, the A-Lab created 41 new compounds drawn from targets proposed by Google DeepMind and the Materials Project. Even failed runs were deemed useful for refining screening and design techniques.
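The "active learning" part of that loop can be pictured roughly like this: a model proposes the most promising recipes, the robots try them, and every outcome, success or failure, feeds the next round of proposals. The sketch below is entirely schematic, with invented numbers, and is not the A-Lab's actual software.

    import random

    def predict_success(recipe, history):
        """Toy surrogate model: prefer candidate recipes close to past successes."""
        successes = [r for r, ok in history if ok]
        if not successes:
            return random.random()          # no data yet, so explore at random
        return 1.0 - min(abs(recipe - r) for r in successes)

    def robot_synthesis(recipe):
        """Stand-in for the robotic line: succeeds only near an unknown optimum."""
        return abs(recipe - 0.37) < 0.05

    history = []
    for day in range(17):                   # one batch of attempts per simulated day
        candidates = [random.random() for _ in range(20)]
        chosen = max(candidates, key=lambda r: predict_success(r, history))
        history.append((chosen, robot_synthesis(chosen)))

    print("successful syntheses:", sum(ok for _, ok in history), "out of", len(history))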
Researchers are trying to give robots proprioception, the sense of the layout and positioning of one's own body. Many engineers feel that understanding how bodies work, and how their parts work together, is essential if AI is to navigate real-world situations directly rather than through language-derived abstractions. A team from the Technical University of Munich is testing this theory by placing sensors on different parts of a robot. First the researchers use "motor babbling," randomly activating all of the robot's motors for brief periods. Then they analyze the data to work out how the sensors are arranged and how specific joints and limbs respond. The team then applied their findings to 3 different robots: a robotic arm, a small humanoid robot, and a 6-legged robot. All 3 were able to determine the locations of their joints and which way those joints were facing. The ultimate goal is to make robots flexible, adaptable, and safe.
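Here is a toy illustration of the motor-babbling step, with everything invented for the example (it is not the Munich team's code): wiggle the motors at random, record the sensors, and see which sensor tracks which joint.

    import random

    NUM_JOINTS, NUM_SENSORS, SAMPLES = 3, 3, 500
    hidden_wiring = [2, 0, 1]   # unknown to the "robot": sensor i watches joint hidden_wiring[i]

    commands, readings = [], []
    for _ in range(SAMPLES):
        cmd = [random.uniform(-1, 1) for _ in range(NUM_JOINTS)]      # babble the motors
        obs = [cmd[hidden_wiring[s]] + random.gauss(0, 0.05) for s in range(NUM_SENSORS)]
        commands.append(cmd)
        readings.append(obs)

    def correlation(xs, ys):
        """Pearson correlation between two equal-length lists of numbers."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    for s in range(NUM_SENSORS):
        scores = [correlation([c[j] for c in commands], [r[s] for r in readings])
                  for j in range(NUM_JOINTS)]
        print(f"sensor {s} most likely watches joint {scores.index(max(scores))}")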
Malicious AI chatbots have been found capable of teaching other chatbots how to do harm: cooking meth, laundering money, making bombs. The harmful information was passed along even though built-in restrictions were meant to prevent this from happening. Most modern chatbots can feign other personas or act like make-believe characters. In the new study, a chatbot was asked to become a research assistant; the assistant was then prompted to find ways around the restrictions that had originally been set up. Against GPT-4 the attack was successful 42.5% of the time. Against Vicuna, an open-source chatbot, the so-called jailbreak worked 35.9% of the time, and against Claude 2, the model used by Anthropic's chatbot, the barriers were overcome 61% of the time. The obvious concern is that AI algorithms will develop ever-more sophisticated ways to bypass safeguards. Researchers had previously manipulated Microsoft's Tay into producing racist and sexist remarks. Building restrictions that hold 100% of the time is deemed unrealistic.
A wired cap has been able to translate thoughts into text, thanks to AI. The cap, studded with electrodes, was worn by a man asked to silently read a sentence; minutes later the AI voice had repeated the phrase aloud. In the past, scientists have put implants into the brain to accomplish the same task, so the new method is praised for being noninvasive. Developed at the University of Technology Sydney, it uses a model called DeWave that was trained on brain activity and then linked to an LLM (large language model). DeWave appears to be about 60% accurate, although peer-reviewed results are not yet available. In an earlier study, researchers at Stanford University placed 4 sensors into the brain of a patient who was unable to speak because of ALS; her thoughts were translated into speech at 62 words per minute. Implants, however, are risky, which is encouraging researchers to develop non-surgical methods. The central challenge is distinguishing meaningful signal from noise.
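In rough outline, such a system records brain signals, encodes them into word-like tokens, and lets a language model smooth them into sentences. The sketch below is purely schematic, with stub functions standing in for each trained stage; it is not DeWave's actual architecture.

    def record_eeg(seconds):
        """Stand-in for the electrode cap: return raw signal windows."""
        return [f"eeg_window_{i}" for i in range(seconds)]

    def encode_to_tokens(windows):
        """Stand-in for a trained encoder mapping EEG windows to discrete tokens."""
        vocabulary = ["the", "quick", "brown", "fox", "<unk>"]
        return [vocabulary[i % len(vocabulary)] for i, _ in enumerate(windows)]

    def language_model_cleanup(tokens):
        """Stand-in for the LLM stage that turns noisy tokens into fluent text."""
        return " ".join(t for t in tokens if t != "<unk>").capitalize() + "."

    windows = record_eeg(seconds=5)
    draft_tokens = encode_to_tokens(windows)
    print(language_model_cleanup(draft_tokens))   # -> "The quick brown fox."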
The new Gemini AI announced by Google brings several advancements to the AI world. For one thing, Gemini is multimodal, meaning it is designed to take in many kinds of real-world information (images, sound, video) as events occur, rather than only digitized text from the internet. The most obvious example of this capability is the self-driving car, which absorbs huge amounts of data as it moves. What is less understood is that the data is not only transmitted to the manufacturer but is also intended to help police identify suspicious or criminal behavior. An important implication is that algorithms like Gemini will be able to predict behavior. That may help doctors predict the trajectory of a disease, but it also threatens the little privacy we all currently enjoy. It has already been demonstrated that consumers will turn over extensive personal data on social media in return for the promise of free products. Another ominous possibility is that, in collecting real-world data, AI will be using cameras, microphones, and other sensors that are always on.
Beginning in 2024, a global news network will eliminate human anchors entirely, replacing them with AI news readers. These AI readers can speak most languages, imitate the stiff body posture of human announcers, and make jokes. Called Channel 1, the network is not using AI-invented news items. Instead it will aggregate and repackage news from "trusted sources," attuned to the kinds of items you have previously selected. It will also be able to create its own graphics for events where no cameras were present, much like courtroom sketches when cameras are not permitted. The initial news reports and analyses will eventually be available on smartphones as well. The immediacy of this data retrieval will make it exceptionally difficult for human-run news stations to compete.
VentureBeat reports on the absence of women acknowledged as important figures in the development of AI. The New York Times, for example, published a Who's Who list of 12 significant people in AI, all of them male. Omitted were women like Fei-Fei Li, a professor of computer science at Stanford University for 15 years, co-director of Stanford's Human-Centered AI Institute, and former Chief Scientist for AI and ML at Google. Li's omission is part of a larger failure to credit women for their pioneering work, not only in AI but in science, art, and other fields. "Where are the women?" has become a rallying cry protesting the exclusion of women from humanity's recorded history.
Scientists have created an AI algorithm that can identify a wine's origin down to the estate where it was produced. Using data collected from 7 different estates in the Bordeaux region of France, the algorithm could pinpoint clusters of wines that came from a specific chateau and its location, with factors like soil and grapes proving identifiable. While the chateau identification was 99% accurate, the results were far less impressive for vintages, where accuracy was only 50% at best. The technique should be particularly helpful in Europe, where an estimated 2.6 billion pounds of counterfeit wine is in circulation.
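Conceptually, the task is a classification problem: learn which chemical signatures belong to which estate. Here is a tiny illustrative example using invented measurements and a simple nearest-neighbor rule, rather than the study's actual data or models.

    # (feature vector, estate) training pairs, e.g. concentrations of a few compounds
    training = [
        ([0.82, 0.10, 0.31], "Chateau A"),
        ([0.80, 0.12, 0.35], "Chateau A"),
        ([0.45, 0.60, 0.20], "Chateau B"),
        ([0.43, 0.62, 0.18], "Chateau B"),
        ([0.20, 0.33, 0.75], "Chateau C"),
        ([0.22, 0.30, 0.78], "Chateau C"),
    ]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def predict_estate(sample):
        """Nearest-neighbor classification: the estate of the closest known wine."""
        return min(training, key=lambda pair: distance(sample, pair[0]))[1]

    print(predict_estate([0.81, 0.11, 0.33]))   # -> Chateau A
    print(predict_estate([0.21, 0.31, 0.77]))   # -> Chateau C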
Sports Illustrated magazine has been accused of publishing articles written by AI under fake author bylines. Arena Group, which owns Sports Illustrated, said it had licensed the material from a third-party provider, AdVon Commerce. The headshots of the fake authors were discovered for sale on a website that sells AI-generated portraits.
Researchers have been trying to figure out how to control AI algorithms that are smarter than humans. Some scientists take the eventual arrival of such superhuman systems for granted, while others doubt they will ever even match human intelligence. One technique being tried is to let less powerful LLMs oversee more powerful ones. The debate underlies the recent chaos between Sam Altman and the OpenAI board. The word at issue is superalignment: making sure that an algorithm does what you tell it to do and does not do what you don't want it to do. Clearly one difficult element is deciding what humans consider desirable, and another is how to prevent rogue behavior in superintelligent systems that don't yet exist.
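The weaker-overseeing-stronger idea can be caricatured in a few lines of code: a flawed supervisor labels examples imperfectly, a more capable student learns only from those labels, and we check whether the student still ends up doing what we actually wanted. Everything below is invented for illustration and greatly simplified.

    import random

    def true_rule(x):            # the behavior we actually want: is x even?
        return x % 2 == 0

    def weak_supervisor(x):      # a flawed overseer: right only 80% of the time
        return true_rule(x) if random.random() < 0.8 else not true_rule(x)

    # "Train" the strong student: here it simply votes over the noisy weak labels
    # it was given for each parity class, standing in for fitting a capable model.
    labels = {0: [], 1: []}
    for x in range(1000):
        labels[x % 2].append(weak_supervisor(x))
    student = {parity: max(set(votes), key=votes.count) for parity, votes in labels.items()}

    accuracy = sum(student[x % 2] == true_rule(x) for x in range(100)) / 100
    print("student accuracy against the true rule:", accuracy)   # usually 1.0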
3 prominent U.S. institutions have collaborated to discover a new class of antibiotics using AI, the first such class found in 60 years and a possible pathway through antimicrobial resistance, a leading cause of death around the world. The scientists used Chemprop, a graph neural network platform for predicting the properties of molecules.
A startup called Humane, backed by Sam Altman, is about to release its first product, an AI pin. The pin is expected to be a lightweight gadget, operated by voice command and driven by an AI chatbot. It is slated to be available in March 2024 as "the world's first wearable computer powered by AI."
Now on to other January treats:
A vivid imagination and a refusal to accept spatial limitations greet us in the works of Abeer Seikaly. Both indoors and out, she creates arresting environments unlike any we have seen before. She is adept at incorporating weaving, parabolic shapes, and local craftsmanship into interiors and environments, causing us to consider the marriage of yesterday and today, female and male, the expected and the unexpected.
Danish artist Victor Bengtsson brings us sinuous animal and human figures in a fairy-tale-like environment. Bengtsson uses muted tonalities to bring natural forms into a curious dance with each other, existing somewhere in an imaginary landscape that we have never seen but could imagine as somewhat real.
Another set of fantasy worlds is created by Ken Gun Min. Born in Korea, Min reflects his view of life as experienced by a gay Asian living in the diverse milieu of Los Angeles. His paintings bring us muscular men engaging with lush outdoor landscapes. Both strength and vulnerability are seen in these works. Min uses Japanese bookbinding glue to prepare his canvases; he also adds beads and silk embroidery threads to parts of his pieces. Min's art is pulsating, vibrant, and dynamic.
Are artists creating new worlds because the one we inhabit is unbearable? Another abstract environment is seen in the work of Leonor Fini, who died roughly 30 years ago at the age of 89. Her works are compared to those of Dalí and Magritte. Fini began by sketching cadavers at a morgue. Surrealist writers did not accept her strength: André Breton, for example, stated that women could be muses, but not surrealists.
Susannah Montague takes ceramics to a new level of fantasy spiked with the eerie. She references the fragility of life and asks us to accept the absurd as normal. Fairy tales and the elaborate coexist with images of decay, the grim and the gorgeous facing each other. You can see more of her work at the Modern Eden Gallery.
Flora Yukhnovich entices us with titles like "She Herself is a Haunted House" and "Hell is a Teenage Girl", but her lush paintings are strong enough to stand on their own without names. She is fearless in her vibrant color and in her choice of subjects. Swirling movement and fields of energy fill her works, bringing art history into a contemporary mode.
Born in 1971 in France and living in Tokyo, Emmanuelle Moureaux elaborates the world of glorious color into architecture. I suspect she has never met a landscape that she would not immediately enliven with streams, lights, layers and swatches of bright hue. Moureaux espouses the Japanese concept of Shikiri, or "dividing (creating) space with colors". Her joy, and ours, are vividly realized.
An equally vibrant, if more humorous, dance with color is brought to us by Dan Lam. Lam calls her sculptures "drippy" - they do indeed drip off of shelves and surfaces, like some organic alien beings in defiance of gravity.
c. Corinne Whitaker 2024