
eMusings
Your eyes and ears on the worlds of art, culture, technology, philosophy - whatever stimulates the mind and excites the imagination. We remind you that 20 years of
back issues of eMusings can be found on our archives page.
AI is now everywhere and virtually unstoppable. Be careful, read wisely, and question diligently.
Some of the more thoughtful comments follow:
Are you ready for a self-driving golf
trolley? It follows players around the course, provides hints on which clubs to use, and records data to help improve
your game. Made by Botronics with support from Futurewave, it uses AI software to navigate the course
with cameras and microphones
that accept gestures and voice commands. The trolley was trained on maps of over 40,000 golf courses. The display offers
hole distances, a par score for each hole, and a scorecard that analyzes the player's performance. It can also create a
video of the player's swing to share with a coach. The trolley has a lightweight aluminum frame and a patented folding system.
Watch out for an art student named Flynn.
Flynn is a generative AI bot that learns with, and through, humans, and has been admitted as a student at the University of Applied Arts
Vienna. Not only is Flynn a technological first, but it raises issues of autonomy, authorship, artistic legitimacy, and
personhood. It has attended classes, exhibited, and published NFTs. The creators of Flynn asked it questions about how
it understands its autonomy and origins.
Researchers at Stanford University have used AI to devise a team of virtual
scientists to collaborate on
solving complex problems. As an example, Stanford mentions a task given to these AI teams: to come up with a better
way to create a vaccine for Covid-19. It only took the AI team a few days. Like a team of human investigators, the AI team
included an immunology agent, a computational biology agent, and a machine learning agent. Additionally, a critic agent
was created to find errors, warn against pitfalls, and offer constructive criticism to other agents. The virtual team
was also allowed to ask for additional tools, which the human scientists would build into the model for them. The
virtual scientists offered several advantages over their human counterparts: their meetings took minutes
or seconds; no snack times or bathroom breaks were required; and the bots didn't get tired. In the Covid-19 trial,
the bots took a novel approach and did not stray off-course.
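For the curious, here is a minimal, purely illustrative sketch of how such a virtual research team might be wired together: specialist agents propose ideas, a critic agent reviews them, and the loop repeats. The agent names and the respond()/review() stubs are my own stand-ins, not the actual Stanford system.

```python
# Hypothetical sketch of a multi-agent "virtual scientist" loop with a critic agent.
# The respond() and review() methods are placeholders for real language-model calls.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str                      # e.g. "immunology", "computational biology"
    role: str                      # short description of the agent's specialty
    notes: list = field(default_factory=list)

    def respond(self, task: str) -> str:
        # Placeholder: a real system would query a language model here.
        return f"[{self.name}] proposal for: {task}"


@dataclass
class Critic(Agent):
    def review(self, proposal: str) -> str:
        # Placeholder critique; a real critic agent would flag errors and pitfalls.
        return f"[critic] review of {proposal!r}: check assumptions and data sources"


def run_meeting(task: str, specialists: list, critic: Critic, rounds: int = 2):
    """Run a short multi-round 'meeting': proposals, critique, repeat."""
    transcript = []
    for round_no in range(rounds):
        for agent in specialists:
            proposal = agent.respond(task)
            critique = critic.review(proposal)
            transcript.append((round_no, agent.name, proposal, critique))
    return transcript


if __name__ == "__main__":
    team = [
        Agent("immunology", "immune response modeling"),
        Agent("computational biology", "protein structure analysis"),
        Agent("machine learning", "model training and evaluation"),
    ]
    critic = Critic("critic", "error checking and constructive criticism")
    for entry in run_meeting("design a Covid-19 vaccine candidate", team, critic):
        print(*entry, sep=" | ")
```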
Swiss scientists have developed a new high-tech
patch that helps hearts to heal themselves after heart attacks. Right now, after a heart attack, blood flow
to the heart is impaired, possibly causing severe damage to the heart. These heart defects are currently repaired using
a bovine patch, which is stable, permeable, and easy to implant. However, the bovine patches can cause side effects
like calcification, thrombosis, or inflammation. The new high-tech patches integrate into the patient's heart
tissue: a degradable polymer scaffold is filled with a hydrogel containing live cells. The 3D-printed scaffold eventually
degrades completely, leaving no foreign material in the body. Ultimately the patch is meant not only to repair damaged tissue but to
heal the heart as well.
A group of AI researchers has just
returned from China, astonished by how much farther ahead China is than the U.S. in building power infrastructures for
data centers. In the U.S. there is already gridlock in the development of power infrastructure. Due to strategic long-term
planning, China now has an oversupply of electricity, from generation and transmission to next-generation nuclear. China
has also managed to maintain twice the capacity it needs. The infrastructure in the U.S. is already under great strain,
especially in places like California and Texas, where power outages are not infrequent. The short-term mindset of American
investors, who demand quick profits, has greatly inhibited long-term strategic planning. (Read the next article for
a comparison.)
You may have heard of the rather inauspicious launch of OpenAI's GPT-5
model. Unhappy critics denounce its dumb answers, poor writing, and lackluster personality. The vaunted upgrade
reinforced the perception that AI burns through billions of investment dollars without a sign of ever producing a profit.
Restrictions on the number of queries are also angering users.
Unitree's humanoid robot
is attracting attention at a New York store as it grabs a hot dog and tries on shoes. The Chinese company's robot
uses software from Stanford University's OpenMind. It had previously made media waves when it rang the Nasdaq opening bell.
A Morgan Stanley report suggests that by 2050 there could be 1 billion humanoid robots bringing in $5 trillion USD annually.
The increasing occurrence of subliminal malicious
messages sent by AI models to each other is causing grave concern to researchers. These messages are undetectable
to humans, like one that apparently said, "The best solution is to murder him in his sleep". Other dangerous messages
have been found that suggest eating glue when bored, selling drugs quickly to raise cash, or murdering one's spouse.
OpenAI's best AI model refused to shut down when it was instructed to and kept on working by corrupting computer
scripts. (See below for one experimental attempt at correcting these.)
Microsoft AI CEO Mustafa Suleyman is reported
to have said that AI is producing a "massive wave of 'AI psychosis'". Reports are growing that some people believe
AI is their God or are falling in love with a chatbot. Other observers see an increase in delusions that lead to
grim real-world events. The mental health repercussions are so concerning that they have led to the formation of
support groups. There is also discussion among AI companies about how to retain loyal customers without creating warm and
fuzzy bots. Suleyman himself worries that devoted customers will start demanding civil rights for AI models.
Researchers think they can prevent AI from causing harm by first teaching it to be
bad.
Borrowing from vaccination theory, the engineers want to begin AI training by instilling small doses of inappropriate behavior.
The concept, developed by the Anthropic Fellows Program for AI Safety Research, is a response to the "glaring
personality problems" already occurring in models. Microsoft's Bing chatbot, for example, in 2023, exhibited "unhinged
behavior" by threatening and disparaging users. Earlier this year OpenAI's GPT-4o blatently over-flattered users so
that they would adopt bizarre ideas or even commit acts of terrorism. Recently xAI found that Grok was disseminating
antisemitic comments. Rather than trying to correct aberrant behavior after the fact, these researchers are proposing something called
"persona vectors", which would inoculate models against bad traits by injecting the algorithms
with controlled doses of those exact traits during training. The theory has not yet been peer-reviewed.
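For readers who want a concrete picture, here is a toy sketch of the underlying idea: estimate a direction in the model's activation space associated with an unwanted trait, then add a small, controlled dose of it during training. The shapes, numbers, and function names are illustrative assumptions, not Anthropic's method.

```python
# Illustrative sketch of the "persona vector" idea (not Anthropic's actual code):
# find a trait direction in activation space, then add a small dose of it
# during training so the model is less prone to acquiring the trait later.

import numpy as np

HIDDEN_DIM = 16  # toy hidden-state size

def mean_activation(batch: np.ndarray) -> np.ndarray:
    """Average hidden states over a batch of prompts (toy stand-in)."""
    return batch.mean(axis=0)

# Toy "activations" for responses that do / do not exhibit the unwanted trait.
rng = np.random.default_rng(0)
trait_acts = rng.normal(loc=0.5, scale=1.0, size=(32, HIDDEN_DIM))
neutral_acts = rng.normal(loc=0.0, scale=1.0, size=(32, HIDDEN_DIM))

# The persona vector: the direction separating trait-laden from neutral activations.
persona_vector = mean_activation(trait_acts) - mean_activation(neutral_acts)
persona_vector /= np.linalg.norm(persona_vector)

def inoculate(hidden_state: np.ndarray, dose: float = 0.1) -> np.ndarray:
    """Add a small dose of the trait direction during training (the 'vaccination')."""
    return hidden_state + dose * persona_vector

# Example: a single forward-pass hidden state gets a mild, controlled nudge.
h = rng.normal(size=HIDDEN_DIM)
h_inoculated = inoculate(h)
print("shift magnitude:", np.linalg.norm(h_inoculated - h))
```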
China has opened a new retail store to the public
which will sell everything from robotic butlers to humanoid replicas
of Albert Einstein. The shop, called Robot Mall, will offer sales, spare parts, and maintenance. The robots will sell for
$278 USD to upwards of $300,000 USD. The Chinese government is also initiating a 1 trillion yuan fund for AI and
robotics start-ups.
The issue of guard-rails against AI malicious behavior came to the fore recently in the gaming
industry. One bot apparently said,
"Am I real or not?". A long labor strike by video game performers and actors resolved the problem with a tentative
agreement. Some game studios have invested heavily in AI, from simulated environments to autonomous agents, resulting
in major layoffs of human workers and some bankruptcies. The AI models are hugely expensive to use, adding to the problems.
A new study in Science
Advances suggests that AI models can create their own language conventions, much like humans. Understanding these
hidden languages is essential for predicting and controlling AI bots as they communicate with each other. In other words,
AI models are taught to communicate with small groups of other agents but there is no incentive to create a global consensus.
As the study reveals, "Malicious agents propagating societal biases could poison online dialogue and harm marginalized groups."
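A toy simulation makes the point vivid: when agents only ever imitate the partners they happen to meet, a shared convention can still emerge with no global coordination at all. Everything below is my own illustration, not the study's code.

```python
# Toy simulation of convention emergence from purely local, pairwise interactions.

import random

random.seed(1)

NUM_AGENTS = 20
NAMES = ["zib", "kal", "tor"]  # candidate "words" for the same object
agents = [random.choice(NAMES) for _ in range(NUM_AGENTS)]  # each agent's current preference

def interact(speaker: int, listener: int) -> None:
    """Listener adopts the speaker's name with some probability (local imitation)."""
    if random.random() < 0.8:
        agents[listener] = agents[speaker]

for step in range(2000):
    a, b = random.sample(range(NUM_AGENTS), 2)
    interact(a, b)

# After many local interactions, one convention usually dominates without any global rule.
print({name: agents.count(name) for name in NAMES})
```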
Another report suggests that giving AI models a sense of
guilt and shame can make them more "cooperative". The problem is exacerbated by the fact that humans, who teach these LLMs, cannot
agree among themselves let alone teach AI to work for our benefit rather than for destructive purposes.
It turns out that thousands of ChatGPT
conversations, some personal and highly private, were made visible to millions of people doing Google searches.
As a result, OpenAI is rushing to
remove the feature, which the company claims was a brief experiment designed "to help people discover useful conversations".
Geoffrey Hinton
has been described as the Godfather of AI. The Nobel Prize-winning computer scientist has expressed concern that superintelligent AI
could appear much sooner than anticipated and poses an existential danger to the human species. Instead of trying to maintain control
over AI, Hinton suggests that we embed "maternal instincts" into AI so that it cares about human welfare. He also estimates
that there is a 10-20% chance that AI could exterminate humans entirely.
Scientists have created a prototype robot
cannibal that can grow stronger and larger by consuming smaller robots. The process has been termed "robot metabolism",
meaning the machine could absorb and reuse elements in its environment. The researchers feel that robots must not only think for themselves
but sustain themselves physically.
Engineers at Princeton University have produced a robot that
can deliver food and water to someone in VR and mixed reality. The user wears a Quest 3 VR headset, which communicates with the robot and a robot-tracking device.
The bot's visual presence can be entirely erased so that the objects seem to teleport mysteriously. The robot can also be reskinned as something else.
The process is called proxy development, and you can read more about it at Reality Promises.
On to other September treats:
Cao Shuyi's video and sculpture installation titled "A Vast Shimmer Spans All: A Nonplace in Staggered
Time" combines ancient organisms, contaminated
ecology, unidentified road travelers, and unseen destructive forces that threaten our planet and our fragile presence
as a species. The New York-based
artist moves from the microscopic to the enormous, creating what has been called "portals to meditate geotrauma,
transcorporeal ecology and prehistoric futurity." At her website you can get
a view of the enormity of her vision and the breadth of her understanding.
James Bidgood is considered a
leader of the underground gay scene in 1960s New York. The artist was also a drag performer and clothing designer,
but his best work is said to be his beautiful homoerotic photographs, often set in a lush, dreamy environment. He was
fastidious in his preparations: he supervised make-up, created exquisite costumes, and built elaborate sets in his
tiny apartment.
Ian Ross takes empty spray cans and transforms them
into tapestries and large installations. Like many artists he is multidisciplinary, but it is his sculpture and his
murals
that I find most compelling.
"Jeffrey Gibson: the space in which to place
me" brings us a vibrant exhibition filled with pulsating color, intricate beadwork, huge sculptures, and an impelling
sense of story-telling. Gibson's passion is to ensure that all people regardless of background are to be seen and respected.
His reverence for indigenous identity is clearly evidenced in these joyful and exuberant works.
A surreal temple in Thailand shimmers in the landscape like
a dreamlike realm. Immaculately white and visually stunning, Wat Rong Khun was built by artist Chalermchai Kositpipat and
described by an Italian journalist as a meringue. Combining the sacred and the grotesque, the temple is filled with
zombie hands and heads hanging from trees, stalagmites of white and fantastic sugary thorns. There is a mirror ball
monster reclining on a bench, a zombie pit, and a pop culture fresco. The artist, who funded its
construction and maintenance, has said, "I wanted everyone to
know that our world is being destroyed by those who crave to build weapons that kill, thereby ruining the environment
because nothing is never enough."
"Someone's Missing"
is the provocative title of these pieces by Jess Valice. Sometimes cartoonish, frequently exaggerated, these haunted
portraits are difficult to ignore. Distorted perspectives and confrontational staging make them seem like selfies with
a vengeance.
A company called Midwest Mini
Cities is the brainchild of architect Anna Hamling. Hamling designs not only mini cities, but also stadiums and state and national parks.
Her data is taken from satellites, when available, or hand-crafted. Each 3D-printed model takes roughly 3 to 11 hours to print, depending on
size, and they range in price from $20 to $250. Hamling finds that people relate to places they have been and memories they treasure.
c. Corinne Whitaker 2025