eMusings
Your eyes and ears on the worlds of art, culture, technology, philosophy - whatever stimulates the mind and excites the imagination. We remind you that 20 years of
back issues of eMusings can be found on our archives page.
Chaos describes the current status of AI worldwide. Although the U.S. and Silicon Valley are capturing the headlines, they are not alone as the
search for dominance preoccupies the entire global community.
Here some of the better online
comments. Keep in mind that we have no idea whether these articles were partially or entirely written by AI:
Research from Google Deep Mind
suggests that people with widely divergent viewpoints can be taught to find common ground with the aid of LLMs (large
language models). Such an outcome could counter the negative effects of much social media, which tends to alienate
people from one another. AI as a mediator has been shown to replace contentious issues with areas of agreement. The outcome
is called "collective deliberation", and is deemed especially important in a free and democratic society. Google claims that
"The AI-mediated approach is time-efficient, fair, scalable, and outperforms human mediators on key dimensions." The research
was based on a study of 5,000 people in the U. K. Although the results were encouraging, a Deep Mind researcher concluded,
"It doesn’t have the mediation-relevant capacities of fact-checking, staying on topic, or moderating the discourse."
It appears that an invisible text has been written that humans cannot understand but AI
can. The invisible characters,
occurring in unicode, allow malicious instructions to be fed into LLMs, along with passwords and financial data. The destructive text can be
combined with normal texts so that the user unknowingly passes it on. One researcher commented, "The fact that GPT 4.0 and Claude Opus
were able to really understand those invisible tags was really mind-blowing to me and made the whole AI security space much more interesting.
The idea that they can be completely invisible in all browsers but still readable by large language models makes [attacks] much more
feasible in just about every area." The process is called ASCII smuggling. Another dangerous interference is known as prompt injection,
which secretly turns unverified data into commands in LLM's. The good code and the malicious one look identical to the human eye but one
contains the viral code points.
Tesla has launched a robot called
Optimus that is already getting negative reviews. Elon Musk claims that Optimus can do anything you need, like walk your dog, go get groceries,
mow your lawn. Online comments are pretty unconvinced. A further
discusssion of Optimus reveals some answers to questions that were posed to Optimus: Query: "What's the hardest thing about
being a robot?"
Reply: "Trying to learn how to be as human as you guys are. And that's something I try to do harder every day and I hope that
you'll help us become that."
Conversations like the above have lead to dire predictions about the future of humanity.
Yushua Bengio, called the Godfather of AI, feels that machine learning systems pose a catastropic risk to humanity if we don't
create a moratorium on their development and figure out how to regulate them. Here is his description of where we are right now:
"One image that I use a lot is that it's like all of humanity is driving on a road that we don't know very well and there's a fog
in front of us. We're going towards that fog, we could be on a mountain road, and there may be a very dangerous pass that we
cannot see clearly enough. So what do we do? Do we continue racing ahead hoping that it's all gonna be fine, or do we try to
come up with technological solutions? The political solution says to apply the precautionary principle: slow down if you're not sure.
The technical solution says we should come up with ways to peer through the fog and maybe equip the vehicle with safeguards." Looking
ahead, he sees two major risks. One, loss of human control, especially if superintelligent machines want to preserve themselves. They
could destroy humanity so that we are unable to turn them off. Two, AI could control humanity in a global dictatorship.
We have previously looked at the problems encountered when trying to understand how AI models use or reveal their reasoning processes. Six researchers
from
Apple demonstrate how that reasoning can be faulty or deceptive. The engineers have concluded that, "Current LLMs are not capable of genuine logical reasoning."
Small data changes resulted in "catastrophic performance drops" in accuracy. Basically the algorithms make changes to their output without
understanding the meaning or consequences. Essentially the LLMs recognize pattern change without understanding what they are doing, creating merely
an illusion of understanding.
Axios
guides us through what is happening with AI in the field of health care and specifically cancer diagnostics. Caitlin Owens toured the
labs of Tempus AI in Chicago. Testing for cancer has become much less expensive. Software and hardware are now more sophisticated.
Blood tests can reveal the effectiveness of treatment. Roughly 1/3 of cancer patients show a genetic mutation that indicates their
probable receptiveness to standard care. Oncology startups have raised more than $16 billion dollars in funding since 2020. Problems
however include coming up with solutions that are simply not as good as what is happening elsewhere in the world. New technologies like AI
could be disruptive, and no one really knows how that will play out. Eric Lefkovsky, CEO of tempus, who also cofounded Groupon,
feels that "there's no doubt in my mind in five or 10 years the vast majority of decisions and interactions will be significantly
influenced by AI, because it's just too efficient a technology and it's kind of perfectly designed for health care."
Su Ryon Shin and colleagues at Harvard Medical
School have created a biohybrid robot composed of living human-derived neurons that are controlled by a machine 'mind',
made up of muscle cells controlled by a programmable electronic 'brain'. At the moment, the tiny bot can only survive and work
in a compound of chemicals. The bots are inspired by nature,converting different types of energy into light or chemical energy. The
researchers were able to control the bot's behavior. The scientists anticipate a new generation called organoids-on-a-chip to
study diseases or test new drug treatments.
Using text to produce images is well known in AI. Now scientists at Stephen James' Robot Learning Lab in
London
have announced a new system called Genima, meant to use images to create training methods for robots. The technique can also
be used to improve AI web agents , the next generation of AI tools designed to tackle complex jobs with minimal supervision.
The agents will be more efficient at tasks like
scrolling, and clicking. Currently neural networks are taught with an image in front of the robot. The network then outputs
a different set of coordinates to, for example, move forward. Genima works by turning the image into a "decision-making system".
The new system uses another neural network called ACT to work out 25 simulations and 9 real world simulations using a robotic
arm. Although the results are modest, the researchers are hopedul they can improve video production from one task to a sequence
of actions.
Two students at Harvard
University
have succeeded in using Meta's smart glasses with face recognition technology
to show anybody's name, address, and phone number just by looking. The students used an "invasive face search engine" called
PimEyes to cross-search databases and then scam or dox people in seconds. (According to Google Dictionary, dox means to
"search for and publish private or identifying information about (a particular individual) on the internet,
typically with malicious intent." The students said they tested their system at a subway station. Both Facebook and
Google appear to have not released similar technologies although they have developed them.
The difficulties that AI systems have with truth and reliability have been well documented. A new generation of
algorithms, however, claims to have surpassed
robots trained on human thinking. Instead the robots are being trained to use "an inhuman and inscrutable language it created itself as it went."
This new version of ChatGPT uses self-play reinforcement learning (RL) not based on human thinking, but rather
"they'll create and discover new knowledge that humans could never have pieced together." In so doing, they are expected to
surpasss human thinking and abilities.
A new field called Neural Dust Technology
is being developed at the University of California, Berkeley. The process employs small wireless sensors placed into
human muscles, nerves and the brain. Each sensor is about as large as a grain of sand and converts ultrasound vibrations into
electricity. Unlike radiation, ultrasound penetrates deeper into the body without harming surrounding tissues. The process
leads to "electroceuticals", meant to regulate the nervous system and to replace drugs in treating chronic pain. The neural
dust can apparently remain in the body for "extended periods". Objections center on privacy, consent, and concerns over misuse.
Sotheby's auction house has just
announced that it will auction off the first artwork created by a humanoid using an AI algorithm. The piece is called
"A.I. God. Portrait of Alan Turing" (2024). The robot paints and draws using cameras in its eyes, a robotic arm, and the
algorithms. The artwork measures 64" X 90" and was erxhibited at the AI for Good Global Summit at the United Nations
in Geneva earlier this year.
On to other October treats:
Raven
Halfmoon creates monumental sculptures that pay homage to the strength of indigenous peoples. Her glazed stoneware figures weigh
hundreds of pounds and hark back to the Rapa Nui figures on Easter Island. Halfmoon explores the spaces between opposites - female and
male, light and dark, today and tomorrow. She uses the Caddo techniques of coil-building to construct and repeat totemic figures
that remind us of the duplicities that exist within us all.
"On the Precipice"
is the title of large-scale paintings of flowers created by Kate Bickmore and inspired by a recent trip to the rainforests
of Borneo. She builds up layers of oil paint, inviting the viewer to build layered images of ourselves. The surrealistic
presence of these flowers, large and intimidating, reminds us of the overwhelming power of nature and its intrusive place
in our lives.
The paintings of Dana Schutz
bring us brooding, dark abstractions with a complex view of the human condition. Ambiguities and impossible situations
live in her world, filled with foreboding dreams and hallucinations. Riddles and obscure references coexist in a dark compelling
palette, apparently busy, although we are never quite sure what they are doing.
Moving from painting to ceramics, we find Raina Lee
with a similarly dark view expressed in her thick sculptural glazes. She describes these vessels as "full of fissures and
volcanic explosions". The result is a deep textural quality that invites introspection. Lee cites as influences the worlds
of ancient Greek, Iranian, and Chinese ceramics as well as reproductions of Chinese antiques she saw at home as a child.
Pencils may be humble tools for most of us, but not when transformed by Jessica
Drenk. The artist uses thousands of pencils
to form her sculptures - at one time she purchased 30,000 unpainted pencils to create dynamic shapes. Her works are both
evocative and imposing, resembling and yet surpassing natural forms.
Lotus
shows us a futuristic model as a "design manifesto" for tomorrow. Their Theory 1 concept car features "saluting" doors, soft robotic
seats, and lots of high tech accoutrements. The auto is a 3-seater car, with the driver's seat in the middle and 2 passenger
seats behind. Inflatable pods embedded in fabric emit pulses to communicate with both driver and passengers. The seat
headrests are 3D printed using a lattice structure. Each occupant can choose whether to listen to video, music,
noise-cancelling, or "enhanced" driving sounds. The doors open by sliding backwards and up, making it
easier to get in and out of small parking spaces.
The portraits of YoYo Lander
reach out to us with intensity, belying the fragility of their pieces of watercolor paper. Complexity and depth
transform the paper snips into strong and evocative images, inviting questions about the individual being potrayed.
The Hammer
Museum in Los Angeles, along with the Art Institute of Chicago, is featuring the works of Christina Ramberg. Ramberg looked abstractly
at the rise of consumerism in America between World War II and the election of Bill Clinton as President. She asks us to
question what we buy, why we buy, and how desire became an object of the purchase economy. These images have no faces, no specific
identity. They are each of us, all of us, fetishizing ownership and possession as enviable pursuits. They have a bold sense of
graphic design, getting right to the point without diversions or extraneous frills.
c. Corinne Whitaker 2024
want to know more about the art?
front page , new
paintings, new
blobs, new
sculpture, painting
archives, blob
archives, sculpture
archives,
photography archives, Archiblob archives, image
of the month, blob
of the month,
art headlines,
technology news, electronic
quill, electronic
quill archives, art smart
quiz, world art
news,
eMusings,
eMusings archive, readers
feast, whitaker
on the web,
nations one,
meet the giraffe,
studio map, just
desserts, Site
of the Month, young
at art,
about the artist?
copyright 2024 Corinne Whitaker