eMusings
Your eyes and ears on the worlds of art, culture, technology, philosophy - whatever stimulates the mind and excites the imagination. We remind you that 20 years of
back issues of eMusings can be found on our archives page.
By now you know that AI is no longer a question of "when". It is everywhere, in every activity, every business, and every household.
We can't put the genie back in the bottle. Can we control it?
Here are some of the better online reports. Keep in mind that we have no idea whether these articles were partially or entirely
written by AI. Many of them are surely written with PR in mind.
Recently we wrote about possession and ownership in an article titled "Nobody Don't Own Nobody".
Now a question has been raised about ChatGPT: if that algorithm produces AI-generated content for your app, who owns it?
That is, if AI rewrites and improves your own code, who owns the new code, and what if it includes secrets or proprietary data? Those
are exactly the issues we wrote about, and the legal experts consulted for this article give a clear and unambiguous
answer: nobody really knows.
OpenAI is offering a new set of reasoning models
by invitation to a "select" group of users for safety testing. The new models, called o3 and o3-mini, were announced
right after Google opened its new Gemini 2.0 Flash Thinking model to the public.
The intense competition between these 2 major AI companies underscores the race to understand how AI algorithms reason.
OpenAI claims that human-written safety codes can be embedded into the algorithms so that the models must explain
their thinking before responding. The process is called chain-of-thought (CoT) reasoning; theoretically at least, it
allows the models to be recalled and safety measures imposed. Those wanting to test the new models
now can apply for access until 10 January 2025.
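To make the idea concrete: here is our own toy sketch of chain-of-thought prompting (not OpenAI's actual mechanism, just an illustration of the principle that a model is asked to show its intermediate steps before answering, so a reviewer can inspect the reasoning):

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question so the model must show its reasoning first.

    Illustrative only: the reply is asked to contain explicit
    intermediate steps, which a human or automated reviewer can
    inspect before trusting the final answer.
    """
    return (
        "Answer the question below. First think step by step, "
        "listing each intermediate step on its own line, then give "
        "the final answer on a line starting with 'ANSWER:'.\n\n"
        f"Question: {question}"
    )

prompt = chain_of_thought_prompt("What is 17 * 24?")
```

The safety hope described above rests on exactly this visibility: if the steps are written out, they can in principle be checked before the answer is acted upon.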
China has announced the mass production of a humanoid robot army
that is designed to rival Elon Musk's Optimus. Agibot revealed its "data collection factory" that will manufacture 1,000
of these bots by the end of 2024. The humanoid bots are, presumably, being trained to do things
like washing laundry and folding clothes. Other activities being taught are inventory shelving, assembling components,
testing, and evaluating performance. It is claimed that U.S. companies are better at upper limb manipulation, cloud
computing, and chip manufacturing, while Chinese companies excel at motion control, advanced AI models, and "diverse
applications". Agibot's star humanoid robot weighs 121 pounds and stands at 69" tall. It can already thread a needle
with accuracy.
A new book called "The AI Mirror" refutes the 2 dominant
theories about AI - either that it will eventually think like us, or that we will inevitably cede our independence
to smarter machines. Author and philosopher Shannon Vallor feels that we need to rebuild our confidence in the
ability of humans to make decisions collectively and wisely. She suggests that we remember who we are, and
acknowledge our ability and responsibility to make a decent world for each other. Vallor quotes The Velvet
Underground's 1967 song "I'll Be Your Mirror": "Reflect what you are, in case you don't know". A mirror, after all, doesn't
show you a second face. It shows you a reflection that lacks attributes like warmth and depth.
The MIT Technology Review
claims that a major problem with training AI algorithms is that we don't know very much about the data sources we are
using. For instance, we don't know where the information comes from or what is inside it. The Data Provenance Initiative,
composed of more than 50 researchers from industry and academia, looked over roughly 4,000 data sets covering 600 languages and 67 countries
and spanning 3 decades. They concluded that power was concentrated in the hands of just a few major technology companies. Specifically, the primary
source of material in multimodal generative AI models is YouTube, a huge advantage for Google's parent company Alphabet. At issue is whether
huge technology companies should control the entirety of human experience. Most AI companies don't reveal what data they use and have few
restrictions on how the data can be used or shared. This unbalanced system is called asymmetric access. It also favors Western nations,
cultures, and viewpoints.
Researchers at Stanford University have produced
an AI algorithm that can predict the activity of thousands of genes within tumor cells using only standard microscopy rather
than the lengthy and expensive genetic sequencing currently in use. The new tool used data collected from more than 7,000 tumor samples
to predict genetic variations in breast cancers and also to predict outcomes for patients.
We already know that huge amounts of
energy are required to fuel the current AI feeding frenzy, primarily to power the servers and keep them cool. Important to note is
that many data centers are located in coal-producing areas like Virginia. Political and tax incentives, along with local objections, determine
where the centers are built. Of intense concern right now is that the emissions from these centers are expected to skyrocket as we move from text
generators to image, video, and music generators, known collectively as multimodal models.
Stanford University researchers are investigating the use of AI to create X-rays
from scratch rather than from human bodies. This
method foretells a future in which much medical data will be synthesized and those syntheses then used to produce more synthetic
data to solve problems. It might perhaps be used to visualize inoperable cancers. At issue are a number of ethical and
societal concerns, not
least of which is revealing personal medical histories. At best it will allow scientists to turn specific genes on and off with a
computer rather than in a patient. This would avoid testing drugs on a human, cause no unpleasant side effects, and eliminate the
need for biopsies. Eventually, though, synthetic data used to create more synthetic data will begin to degrade and misrepresent
results, threatening harm to patients.
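Why would synthetic-on-synthetic data degrade? A toy calculation of our own (a deliberate simplification, not from the article) shows the mechanism: if each "generation" of synthetic data is produced by fitting a simple statistical model to samples of the previous generation, the expected spread of the data shrinks a little at every refit, and after many rounds most of the original variety is gone.

```python
def expected_variance(initial_var: float, n: int, generations: int) -> float:
    """Expected variance after repeatedly refitting a Gaussian.

    Toy model: each generation fits a Gaussian to n samples drawn
    from the previous generation's fit. With the maximum-likelihood
    variance estimator, the expected variance shrinks by a factor of
    (n - 1) / n per generation, so repeated resynthesis steadily
    drains the data of its original spread.
    """
    var = initial_var
    for _ in range(generations):
        var *= (n - 1) / n  # expected shrinkage per refit
    return var

# 50 rounds of resynthesis from batches of 20 samples each:
final = expected_variance(1.0, n=20, generations=50)
```

After 50 generations the expected variance has fallen to under a tenth of its starting value: the synthetic data no longer represents the range of the original patients.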
Google and Harvard
University are planning to use 1 million public-domain books as datasets for training AI models. Called the Institutional Data
Initiative (IDI), the project has received funding from Microsoft and OpenAI.
A cofounder of OpenAI claims that the field is rapidly running out of
data to be used for training models. Similar to fossil fuels, which are a finite resource, the Internet has a finite amount of
human-generated information. Ilya Sutskever predicts that the new models will not be trained on what has come before but will be able
to work things out on a step-by-step basis that is closer to human reasoning. The process will produce unpredictable results and
possibly give AI the same freedoms that humans have.
2 new books discuss how Silicon Valley is eating away at
democracy, giving huge power to large technology companies. The books also insist that it is essential for society to start taking
back that power. The necessity for a backlash against the "sovereigns of cyberspace" is discussed in Rob Lalka's
"The Venture Alchemists: How Big Tech Turned Profits into Power" and Marietje Schaake's "The Tech Coup: How to Save Democracy from
Silicon Valley". Both books analyze the unprecedented wealth generated by Big Tech and how it is being used to undermine democracy.
Axios looks at the boom or doom
predictions surrounding the rapid invasion of AI into our lives, quoting experts in the field with their fears and/or optimism
for the future. The most dire warnings are encapsulated in this declaration by Sam Altman, OpenAI CEO:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and
nuclear war."
An AI doctor called Grok
owes its existence to Elon Musk. Grok has already been embedded into more than 400 million smartphones, offering health
advice and diagnoses. Those who have limited access to health care are impressed; others are concerned with lack
of privacy and spread of personal information.
Google asserts that Willow, its new quantum
chip, shows that parallel universes exist and "we live in a multiverse." To demonstrate the power of this quantum
computer, it performed one task that today's fastest supercomputer would have taken 10 septillion years to complete.
Unlike classical computers, which use zeros and ones as their base language, quantum computers are built on amazingly
tiny qubits. These can be zero, one, or a superposition of both, although the more qubits involved, the greater the probability that
errors will occur.
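The "somewhere between zero and one" idea can be made concrete with a small sketch of our own (standard textbook quantum mechanics, not anything specific to Willow): a qubit is described by two amplitudes, and measuring it yields 0 or 1 with probabilities given by those amplitudes squared.

```python
import math

def measure_probabilities(alpha: complex, beta: complex):
    """Return P(0), P(1) for a qubit in state alpha|0> + beta|1>."""
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    a, b = alpha / norm, beta / norm  # normalize the state
    return abs(a) ** 2, abs(b) ** 2

# A classical bit is either 0 or 1. A qubit in equal superposition
# is genuinely "between": each outcome appears with probability 1/2.
p0, p1 = measure_probabilities(1 / math.sqrt(2), 1 / math.sqrt(2))
```

A classical simulation like this needs exponentially more numbers as qubits are added, which is precisely why real quantum hardware can outrun classical supercomputers on certain tasks.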
A TED talk introduces us to AlterEgo,
a wearable AI device that lets you communicate without saying anything aloud. All you have to do is think the words. The Internet
and AI would no longer be external to our bodies. As is usual with AI, the implications of this technology range from the
hopeful to the terrifying.
On to other January treats:
Sculptor Rina Banerjee discusses
the relationship between objects and their materials. Banerjee uses trinkets and baubles produced for tourists, assembling them
into sculptures that are heavily ornamented and feel somehow mystical. She has stated, "Everything is a body. Whether a teacup or a
mountain, I do not think things are gendered; it depends on how a society pushes these roles".
Carson Monahan presents a distinctive visual signature, surreal, reflective,
with both figures and their environment frozen in time. There is something eerie and dreamlike in these canvases,
occasionally suffused with an unnamed dread.
Matthew Barney's strong visual impact derives from myth,
mixed genders, violence, and excess. Skilled in film projects, video art, performance art and sculpture, Barney often invokes
sexual fantasies, voyeurism, and transformation in his works.
Treat yourself to an adventure in generative AI
based on architecture and Art Nouveau. You will find a world of improbable and gorgeous buildings here in a world
that I would gladly journey to.
Rashid Johnson works primarily in
abstractions that encompass themes of identity and cultural boundaries. He moves fluidly between art
and the world it references, using painting, sculpture, and
installation to illuminate "African American intellectual and imaginative life".
The British Museum introduces us to the Sogdians,
a group of people unknown to most of us. The Sogdians flourished in Central Asia between the years AD 500 and 800.
They had a rich visual and material culture and traveled extensively along the Silk Roads.
c. Corinne Whitaker 2025