eMusings

Your eyes and ears on the worlds of art, culture, technology, philosophy - whatever stimulates the mind and excites the imagination. We remind you that 20 years of back issues of eMusings can be found on our archives page.

AI is already deeply integrated into many of our daily routines. Will it displace jobs? Absolutely. Read some of the better comments here, and keep in mind that we have no idea whether these articles themselves were partially or entirely written by AI:

A warning about CrowdStrike: it will happen again, and worse, if we continue our obsession with the 3 R's - not Reading, 'Riting and 'Rithmetic, but the Reckless Road to Riches.

Andrej Karpathy, formerly Director of AI at Tesla and a founding member of OpenAI, has started a new school called Eureka Labs. His concept is that education must combine quality learning materials with the guidance of an expert, although such experts are currently in short supply. The first offering is LLM101n, a class for undergraduates. With the kind of humility that characterizes this hyped era, Karpathy describes it as "the best AI course in the world." The topics include teaching students how to train their own AI.

Hoping to soften the blow for the millions whose jobs may disappear, AI entrepreneurs like Sam Altman plan to fund a national basic income that would provide monthly payments from the government. Altman himself feels that some kind of guaranteed income could eliminate poverty and close the gap between the ultra-wealthy cashing in on AI and those whose livelihoods are eliminated. An experiment run by OpenResearch, a lab Altman funds, found that recipients used the money for housing, transportation, food, and helping others. Guaranteed-income plans have a checkered history, particularly in the face of persistent inflation. Idealists like to think that new technologies have historically created new jobs, but that is far from a certainty.

Google's position as the primary search engine is currently under scrutiny. From an initial focus on luring individual searchers, its emphasis turned to attracting advertisers, an immensely profitable process called surveillance capitalism. The next step in that business model has been termed enshittification: deterioration as attention shifts from serving users and businesses to pleasing shareholders. ChatGPT, for example, gives a simple answer without ads (currently, at least) and without forcing you to click through a list of websites, many of which generate substantial income for Google. At the moment, generative AI provides results that range from good to useless to laughable. Google's current answer appears to be a process called E-A-T (expertise, authoritativeness, trustworthiness), a method that apparently failed when Yahoo tried it in the past.

One of the problems facing today's AI chatbots is their failure to understand reasoning. An OpenAI project called Strawberry plans to overcome that obstacle using what it calls "deep research". Large language models have been particularly weak at math and science. The new method is said to resemble one described in a 2022 paper from Stanford University called Self-Taught Reasoner, or STaR. In both, the AI is asked to explain the reasoning behind its answers, leading to a fine-tuning of the thinking process. This type of teaching apparently enables the LLM to self-correct. Caution is urged, however, since descriptions of breakthroughs tend to be more hyped than accurate. Eventually the goal may be reached, and everyone involved in the research hopes to be the one who makes the leap.

Former researchers at Meta claim to have built a new AI model that can create proteins never seen in nature. Their company's large language model, called ESM3, was trained on 2.78 billion proteins; for each one, they were able to identify data about its sequence, structure, and function. DeepMind also has an AI program, AlphaFold3, that can apparently predict the structure and function of every single protein in the universe.

OpenAI is using game theory to make AI models explain themselves. The concept is termed ELI5, meaning "Explain It Like I'm 5 (years old)". Researchers hope to see into the "black box" of AI reasoning, a challenge they call legibility. Understanding the reasoning behind AI's thinking is critical to determining the reliability of its responses, especially in areas like medicine, law, the military, and other essential infrastructure. The game used, called the "Prover-Verifier Game", pits two AI models against each other with the goal of improving the trustworthiness of the responses.

An MIT Technology Review article aims to demystify the terminology used in talking about AI. The article is not only informative but humorous, taking the hot air out of "mystical mumbo-jumbo".

More questions are arising over how to tell the difference between human consciousness and the behavior of machines. Of particular concern is the ability of AGI (artificial general intelligence) to out-think humans, especially if these systems develop desires and feelings. Does that make them human? If they never move beyond computation, does that make them sociopaths? Neuromorphic engineers struggle with these issues as they ask whether we will be able to discern if AI models really feel things like sadness or falling in love, or merely look like they are experiencing them.

Now on to other August treats:

Nonotak Studio has created combined installation and performance works that distort space and place humans in strangely disrupted environments. The studio grew out of a 2011 collaboration between a visual artist and an architect-musician.

Abstracted dreamscapes fill the canvases of Camilla Engstrom. Born in Sweden in 1989, Engstrom began by drawing a voluptuous alter ego called "Husa". Her metaphysical landscapes, rendered in an earthy palette, combine eroticism with quiet motion.

A new installation titled "Dvorak Dreams" by Refik Anadol will take place at the Kennedy Center in Washington, D.C. on September 4. Called an immersive data sculpture, it combines AI with "data-driven processes" meant to reflect the composer's works.

Lively, colorful, and engaging describe the works of Jun Ioneda of São Paulo, Brazil. Ioneda's imaginative world is filled with motifs from science fiction, fantasy, Japanese culture, and queer themes. A similar aesthetic can be seen in the pieces by Nadja Zinneker, a German artist whose palette is slightly darker but no less whimsical.

The Rio Art Museum is showing "Funk: A Cry of Boldness and Freedom" in two related rooms. One room is dedicated to soul music, while the other features "baile de favela", the dance parties said to inspire intense artistic output in Brazil.

Speculations about what a Caribbean future might look like are featured at the Perez Art Museum in Miami, Florida, in an exhibition called "The Other Side of Now: Foresight in Contemporary Caribbean Art".

The Museum of Modern Art is presenting "Projects: Tadaskia", featuring the multidisciplinary artist based in Brazil. Tadaskia's works draw on Afro-Brazilian rituals, particularly themes of transformation, changing cosmologies, ambivalence, and doubt.

Lumen Studios brings an exhibition titled "Extension of Selves", celebrating the 700th anniversary of Marco Polo's death and the 20th anniversary of a partnership between Italy and China.

c. Corinne Whitaker 2024


want to know more about the art?
about the artist?

email: giraffe@giraffe.com
