
Writers and Editors (RSS feed)

Artificial intelligence (AI): What problems does it solve? What problems does it bring? And what the heck is a bot?

This replaces an earlier version of this post, which appeared in June 2018. Updated 5-15-24.

 

See also Artificial Intelligence and Copyright
Artificial intelligence, ChatGPT, Dall-E, and OSINT (open source intelligence)

 

The Basics (Pro and Con) about AI


Artificial Intelligence (The Authors Guild) New AI technologies necessitate legal and policy interventions that balance development of useful AI tools with protection of human authorship. The Authors Guild added a new clause to its Model Trade Book Contract and Model Literary Translation Contract prohibiting the use of an author's work for training artificial intelligence technologies without the author's express permission.

• The bottom line: AI cannot be trusted to be accurate. Don't let ease of use lure you into shortcuts that lead to inaccuracy.
---Sign the Statement on AI Training (Authors Guild)

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
---The Authors Guild, John Grisham, Jodi Picoult, David Baldacci, George R.R. Martin, and 13 Other Authors File Class-Action Suit Against OpenAI (Authors Guild press release, 9-20-23). 
    "1. Plaintiffs, authors of a broad array of works of fiction, bring this action under the Copyright Act seeking redress for Defendants’ flagrant and harmful infringements of Plaintiffs’ registered copyrights in written works of fiction. Defendants copied Plaintiffs’ works wholesale, without permission or consideration. Defendants then fed Plaintiffs’ copyrighted works into their “large language models” or “LLMs,” algorithms designed to output human-seeming text responses to users’ prompts and queries. These algorithms are at the heart of Defendants’ massive commercial enterprise. And at the heart of these algorithms is systematic theft on a mass scale.

     "2. Plaintiffs seek to represent a class of professional fiction writers whose works spring from their own minds and their creative literary expression. These authors’ livelihoods derive from the works they create. But OpenAI’s LLMs endanger fiction writers’ ability to make a living, in that the LLMs allow anyone to generate—automatically and freely (or very cheaply)—texts that they would otherwise pay writers to create. Moreover, OpenAI’s LLMs can spit out derivative works: material that is based on, mimics, summarizes, or paraphrases Plaintiffs’ works, and harms the market for them."
---More than 15,000 Authors Sign Authors Guild Letter Calling on AI Industry Leaders to Protect Writers (Authors Guild, 7-18-23)

 


AI Is Inventing Academic Papers That Don’t Exist — And They’re Being Cited in Real Journals (Miles Klee, Rolling Stone, 12-17-25) The proliferation of references to fake articles threatens to undermine the legitimacy of institutional research across the board. I didn't have access to the story but here's one of the comments: "I was curious how well AI worked for finding references so fed ChatGPT a paragraph from a paper I wrote last year. Five out of six papers it gave weren't real. One even had the title of one of my papers but gave different authors and a fake DOI."  
AI is Destroying the University and Learning Itself (Ronald Purser, Current Affairs, December 2025) Students use AI to write papers, professors use AI to grade them, degrees become meaningless, and tech companies make fortunes. Welcome to the death of higher education. "The irony was hard to miss: the same month our union received layoff threats, OpenAI’s education evangelists set up shop in the university library to recruit faculty into the gospel of automated learning. The math is brutal and the juxtaposition stark: millions for OpenAI while pink slips go out to longtime lecturers. The CSU isn’t investing in education—it’s outsourcing it, paying premium prices for a chatbot many students were already using for free."

Traditional AI vs. Generative AI (Theresa Gutierrez, Fishtank, 6-20-25) A comprehensive guide to understanding two distinct AI paradigms and their transformative impact on modern business operations.

What is AI? Everything you need to know about Artificial Intelligence (Radhika Rajkumar, ZDNet, 6-5-24)

     Everything that makes up the technology, from machine learning and LLMs to general AI and neural networks, and how to use it.

     "Of course, an important component of human intelligence is something that AI hasn't been able to replicate yet: context. For example, Google AI lacks real-world logic and can't discern human subtleties like sarcasm and humor, as evidenced by the technology advising you to add glue to pizza sauce to help the cheese stick or use gasoline to make spaghetti spicy. These examples are lower stakes, but an AI system taking action without semantic understanding can have major consequences in the wrong situation."


Crucial to never assume that the accuracy of artificial intelligence information equals the truth (Tshilidzi Marwala, Daily Maverick, 7-17-24) When "accuracy is confused with truth, there is a high risk of harm, especially in fields where human judgement and ethical considerations are critical....The precision of AI predictions and analyses can be seductive, leading many to conflate high accuracy with truth. However, this conflation is misleading and potentially dangerous, as AI systems increasingly influence critical aspects of our lives, from finance to healthcare to legal judgments."
    "For instance, an AI model predicting stock market trends with high accuracy might correctly forecast price movements based on historical data patterns and real-time market analysis. Yet, this accuracy does not assure truthfulness.
     "The model’s predictions can be entirely correct within the scope of its data while being untrue due to external factors it cannot predict or account for, such as an unexpected political event or a company’s internal scandal caused by information asymmetry."

     Also published as Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth (United Nations University, UNU Centre, 7-18-2024)

     "The distinction between accuracy and truthfulness is particularly pronounced in the context of AI predictions about workplace performance. Consider an AI system tasked with determining employee productivity. It may analyse metrics such as hours logged, emails sent, and tasks completed to forecast future performance accurately.

     "However, while these metrics are accurate, they do not fully capture an employee’s capabilities, motivations, or potential issues. An employee on the verge of burnout may be highly productive today. However, their performance may suffer significantly if their mental health deteriorates — a factor AI may be unable to predict."
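Marwala's point about accurate-but-untrue predictions can be illustrated with a toy sketch. The function below computes a "productivity score" from logged metrics only; the weights and metrics are purely hypothetical, invented for illustration, and not drawn from any real system. The score is perfectly consistent with its inputs, yet it is blind to anything those inputs don't capture, such as impending burnout.

```python
# Toy illustration of "accuracy is not truth": a score computed only
# from logged workplace metrics. Weights are hypothetical.
def productivity_score(hours_logged, emails_sent, tasks_done):
    # The model is deterministic and "accurate" with respect to its
    # inputs -- but those inputs are all it can ever know about.
    return 0.5 * hours_logged + 0.2 * emails_sent + 0.3 * tasks_done

# Two employees with identical logged metrics get identical scores,
# even if one of them is on the verge of burnout. The unmeasured
# factor simply does not exist as far as the model is concerned.
employee_a = productivity_score(45, 120, 18)
employee_b = productivity_score(45, 120, 18)  # same metrics, hidden burnout
assert employee_a == employee_b
```

The sketch is trivial by design: the danger Marwala describes arises when a score like this one, correct within the scope of its data, is mistaken for the whole truth about a person.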

What is an Internet bot? (Wikipedia) An Internet bot, web robot, robot, or simply bot, is a software application that runs automated tasks over the Internet, usually with the intent of imitating human activity, such as messaging, on a large scale.
What is a bot: types and functions (Digital Guide IONOS UK, 11-16-21) What is a bot, what functions can it perform, and what does its structure consist of? Learn about Rule-based bots and self-learning bots, the different types of good bots, the different types of malware bots, and how they work. What types of attacks can botnets perform?
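The rule-based bots described in the IONOS guide can be sketched in a few lines: match an incoming message against fixed patterns and reply automatically. The patterns and replies below are invented for illustration; real bots add network I/O, scheduling, and rate limits, and self-learning bots replace the hand-written rules with a trained model.

```python
# A minimal rule-based bot sketch: fixed patterns, canned replies.
import re

# Hypothetical rules for a customer-service bot.
RULES = [
    (re.compile(r"\bhours?\b", re.I), "We're open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(price|cost)\b", re.I), "Plans start at $10/month."),
]

def reply(message: str) -> str:
    # Return the first rule whose pattern matches; otherwise fall back.
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Sorry, I didn't understand. A human will follow up."

print(reply("What are your hours?"))  # matched by the first rule
```

Everything the bot can say is written in advance, which is why rule-based bots are predictable but brittle, and why the self-learning variety has largely displaced them for open-ended conversation.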
ChatGPT (AI) This chatbot, launched by OpenAI in November 2022, is being used to write novels, among other things. It has a problem with factual accuracy. See also this site's section on ChatGPT (AI).

[Back to Top]

 

Writers, journalists, and other creators and artificial intelligence: copyright protection, plagiarism, flaws and inaccuracies, and how frank creators must be about using AI
FAQs on the Authors Guild’s Positions and Advocacy Around Generative AI
AI meets a credit union: Is it real or a mirage? (Robert McGarvey, CUInsight, undated)

    "One big question that has held back many credit unions from making aggressive use of AI tools is fear that the data they input into an AI tool suddenly is in the wild and out of the institution’s control. Drake says there are simple fixes....

   "How can you know if any of your AI efforts are worth the bother? As you implement AI trials, says Drake, keep scorecards and be ready to move off projects that aren’t generating tangible results."
A crash course for journalists on AI and machine learning (Video, 51 min., International Journalism Festival, 4-7-22)
Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need (Casey Ross and Bob Herman, Stat Investigation, 3-13-23; a Pulitzer finalist for a series exposing how UnitedHealth Group, the nation's largest health insurer, used an unregulated algorithm to override clinicians' judgments and deny care, highlighting the dangers of AI use in medicine.) Read the full series.

The AI is eating itself (Casey Newton, Platformer, 6-27-23) Boy, is this post packed with info and insights. The third paragraph alone kept me online for an extra half-hour, following links to more good reading.


CNET Is Quietly Publishing Entire Articles Generated By AI (Frank Landymore, The Byte, 1-15-23) "This article was generated using automation technology," reads a dropdown description. The articles are published under the unassuming appellation of "CNET Money Staff," and encompass topics like "Should You Break an Early CD for a Better Rate?" or "What is Zelle and How Does It Work?" That byline obviously does not paint the full picture, and so your average reader visiting the site likely would have no idea that what they're reading is AI-generated.

     (H/T to Jon Christian for links to this and next four pieces)
Google Is Using A.I. to Answer Your Health Questions. Should You Trust It? (Talya Minsberg, NY Times, 5-31-24) Experts say the new feature may offer dubious advice in response to personal health queries.
AI Robs My Students of the Ability to Think (Alex Green, WSJ, 8-12-25) They report that they find their ability to write, speak and conduct basic inquiry is slipping away.
The Age of AI in the Newsroom (Lyndsey Jones, Women in News, 5-25)
The One Danger That Should Unite the U.S. and China (Thomas L. Friedman, NY Times, 9-2-25) Friedman argues that this prospect of rogue actors using A.I. to cause global disruption will force the United States and China to establish a mechanism for trust that spans both superpowers, and other countries that choose to join in. Friedman and Mundie contend that the United States and China should find a way to build into A.I. devices what Mundie calls “trust adjudicators,” a kind of internal referee that monitors the values inherent in any machine-driven action. This referee would rely on shared moral and ethical tenets that prohibit stealing, cheating, killing and so on.
CNET's Article-Writing AI Is Already Publishing Very Dumb Errors (Jon Christian, The Byte, Futurism, 1-29-23) CNET is now letting an AI write articles for its site. The problem? It's kind of a moron.
Sports Illustrated Published Articles by Fake, AI-Generated Writers (Maggie Harrison Dupré, Futurism, 11-27-23) We asked them about it — and they deleted everything.
CNET's AI Journalist Appears to Have Committed Extensive Plagiarism (Jon Christian, The Byte, Futurism, 1-23-23) CNET's AI-written articles aren't just riddled with errors. They also appear to be substantially plagiarized.
BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes (Noor Al-Sibai and Jon Christian, Futurism, 3-30-23) These read like a proof of concept for replacing human writers, with lots of repetition of pet phrases.

[Back to Top]


The AI takeover of Google Search starts now (David Pierce, The Verge, 5-10-23) Google is moving slowly and carefully to make AI happen. Maybe too slowly and too carefully for some people. But if you opt in, a whole new search experience awaits.
Google Rolls Back A.I. Search Feature After Flubs and Flaws (Nico Grant, NY Times, 6-1-24) Google appears to have turned off its new A.I. Overviews for a number of searches as it works to minimize errors.

AI, Machine Learning and Robotics: Privacy, Security Issues (Marianne Kolbasuk McGee, GovInfoSecurity.com, 12-6-19) Attorney Stephen Wu discusses the challenges. What kinds of security management procedures are touching surgical robots and AI systems? How do we make communications secure from one point to another, if AI can access patients' medical records? How do we protect against cybercrime?
Benefits & Risks of Artificial Intelligence (Future of Life Institute) “Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial."~ Max Tegmark, President of the Future of Life Institute
The Top Myths About Advanced AI (Future of Life Institute)


AI is killing the old web, and the new web struggles to be born (James Vincent, The Verge, 6-26-23) Generative AI models are changing the economy of the web, making it cheaper to generate lower-quality content. We’re just beginning to see the effects of these changes.
New Tool Could Poison DALL-E and Other AI to Help Artists (Josh Hendrickson, PC Mag, 10-27-23) Researchers from the University of Chicago introduce a new tool, dubbed Nightshade, that can 'poison' AI and ruin its data set, leading it to generate inaccurate results.
---This new data poisoning tool lets artists fight back against generative AI (Melissa Heikkilä, MIT Technology Review, 10-23-23) The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.
Godfathers of AI Have a New Warning: Get a Handle on the Tech Before It's Too Late (Joe Hindy, PC Mag, 10-24-23) Two dozen experts warn that 'AI systems could rapidly come to outperform humans in an increasing number of tasks [and] pose a range of societal-scale risks.'

[Back to Top]


How AP Investigated the Global Impacts of AI (Garance Burke, Pulitzer Center, 6-21-23) "When my editor Ron Nixon and I realized that too few journalists had gotten trained on how these complex statistical models work, we devised internal workshops to build capacity in AI accountability reporting....No surprise, FOIA and its equivalents are an imperfect tool and rarely yield raw code. Little transparency about the use of AI tools by government agencies can mean public knowledge is severely restricted, even if records are disclosed. Viewing predictive and surveillance tools in isolation doesn’t capture their full global influence. The purchase and implementation of such technologies isn’t necessarily centralized. Individual state and local agencies may use a surveillance or predictive tool on a free trial basis and never sign a contract. And even if federal agencies license a tool intending to implement it nationwide, that isn’t always rolled out the same way in each jurisdiction."
AI is being used to generate whole spam sites (James Vincent, The Verge, 5-2-23) A report identified 49 sites that use AI tools like ChatGPT to generate cheap and unreliable content. Experts warn the low costs of producing such text incentivizes the creation of these sites.
The semiautomated social network is coming (James Vincent, The Verge, 3-10-23) LinkedIn announced last week it’s using AI to help write posts for users to chat about. Snap has created its own chatbot, and Meta is working on AI ‘personas.’ It seems future social networks will be increasingly augmented by AI.

[Back to Top]

 

 

AI Art for Authors: Which Program to Use (Jason Hamilton, Kindlepreneur, 12-9-22) There are dozens of AI art tools out there, many with unique specialties. But most would agree that three stand above the rest:
    Midjourney
    Dall-E 2
    Stable Diffusion.

Hamilton discusses how to access them, what they cost, how they can be useful, and why he recommends them (or not, and what for, illustrated), with a final section on AI art's copyright problems: Are these tools copying existing art on the collage principle (a little here, a little there), and are they facing legal and copyright problems?
Artificial Labor (Ed Zitron's Where's Your Ed At, 5-12-23) With the 2023 Writers Guild of America strike, "we are entering a historical battle between actual labor – those who create value in organizations and the world itself – and the petty executive titans that believe that there are no true value creators in society, only “ideas people” and those interchangeable units who carry out their whims...The television and film industries are controlled by exceedingly rich executives that view entertainment as something that can (and should) be commoditized and traded, rather than fostered and created by human beings. While dialogue eventually has to be performed by a human being, the Alliance of Motion Picture and Television Producers clearly views writing (and writers) as more of a fuel that can be used to create products rather than something unique or special....entertainment’s elites very clearly want to be able to use artificial intelligence to write content."

[Back to Top]


The Fanfic Sex Trope That Caught a Plundering AI Red-Handed (Rose Eveleth, Wired, 5-15-23) Sudowrite, a tool that uses OpenAI’s GPT-3, was found to have understood a sexual act known only to a specific online community of Omegaverse writers. The data set that was used to train most (all?) text-generative AI includes sex acts found only in the raunchiest of fanfiction. "What if your work exists in a kind of in-between space—not work that you make a living doing, but still something you spent hours crafting, in a community that you care deeply about? And what if, within that community, there was a specific sex trope that would inadvertently unmask how models like ChatGPT scrape the web—and how that scraping impacts the writers who created it?" (H/T Nate Hoffelder, Morning Coffee)
AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit (James Vincent, The Verge, 1-16-23) The suit claims generative AI art tools violate copyright law by scraping artists’ work from the web without their consent. Butterick and Saveri are currently suing Microsoft, GitHub, and OpenAI in a similar case involving the AI programming model CoPilot, which is trained on lines of code collected from the web.
The lawsuit that could rewrite the rules of AI copyright (James Vincent, The Verge, 11-8-22) Microsoft, its subsidiary GitHub, and its business partner OpenAI have been targeted in a proposed class action lawsuit alleging that the companies’ creation of AI-powered coding assistant GitHub Copilot relies on “software piracy on an unprecedented scale.”

---"Someone comes along and says, 'Let's socialize the costs and privatize the profits.'"

---“This is the first class-action case in the US chal­leng­ing the train­ing and out­put of AI sys­tems. It will not be the last.”
The scary truth about AI copyright is nobody knows what will happen next (James Vincent, The Verge, 11-15-22) The last year has seen a boom in AI models that create art, music, and code by learning from others’ work. But as these tools become more prominent, unanswered legal questions could shape the future of the field.
Wendy’s to test AI chatbot that takes your drive-thru order (Erum Salam, The Guardian, 5-10-23; also carried by the St. Louis Post-Dispatch) The Guardian reports that Wendy's is ready to roll out an artificial-intelligence-powered chatbot capable of taking customers' orders. The pilot program ‘seeks to take the complexity [the humans] out of the ordering process.’
In a Reminder of AI's Limits, ChatGPT Fails Gastro Exam (Michael DePeau-Wilson, MedPage Today, 5-22-23) Both versions of the AI model failed to achieve the 70% accuracy threshold to pass.
Some companies are already replacing workers with ChatGPT, despite warnings it shouldn’t be relied on for ‘anything important’ (Trey Williams, Fortune, 2-25-23)
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (NY Times, 5-1-23) For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
Teaching A.I. Systems to Behave Themselves (Cade Metz, NY Times, 8-13-17)
AI Safety for Fleshy Humans (Nicky Case & Hack Club) The core ideas of AI & AI Safety* — explained in a friendly, accessible, and slightly opinionated way.

[Back to Top]

 

On the plus or minus side:
Smarter health: How AI is transforming health care (Dorey Scheimer, Meghna Chakrabarti, and Tim Skoog, On Point, first piece in a Smarter Health series, WBUR radio, 5-27-22, with transcript) Guests Dr. Ziad Obermeyer (associate professor of health policy and management at the University of California, Berkeley School of Public Health. Emergency medicine physician) and Richard Sharp (director of the biomedical ethics research program at the Mayo Clinic, @MayoClinic) explore the potential of AI in health care — from predicting patient risk, to diagnostics, to just helping physicians make better decisions.
Artificial Intelligence Is Primed to Disrupt Health Care Industry (Ben Hernandez, ETF Trends, 7-12-15) Artificial intelligence (AI) is one of the prime technologies leading the wave of disruption that is going on within the health care sector. Recent studies have shown that AI technology can outperform doctors when it comes to cancer screenings and disease diagnoses. In particular, this could mean specialists such as radiologists and pathologists could be replaced by AI technology. Whether society is ready for it or not, robotics, artificial intelligence (AI), machine learning, or any other type of disruptive technology will be the next wave of innovation.
How will large language models (LLMs) change the world? (Dynomight Internet Newsletter, The Browser, 12-8-22) Think about historical analogies for 'large language models': the ice trade and freezers; chess humans and chess AIs; farmers and tractors; horses and railroads; swords and guns; swordfighting and fencing; artisanal goods and mass production; site-built homes and pre-manufactured homes; painting and photography; feet and Segways; gull-wing and scissor doors; sex and pornography; human calculators and electronic calculators.

[Back to Top]


Artificial You: AI and the Future of Your Mind by Susan Schneider. Can robots really be conscious? Is the mind just a program? "Schneider offers sophisticated insights on what is perhaps the number one long-term challenge confronting humanity."―Martin Rees
Top 9 ethical issues in artificial intelligence (Julia Bossmann, World Economic Forum, 10-21-16) In brief: unemployment, income inequality, humanity, artificial stupidity (mistakes), racist robots (AI bias), security (safety from adversaries), evil genies (unintended consequences), singularity, robot rights. She makes interesting points!
AI in the workplace: Everything you need to know (Nick Heath, ZDNet, 6-29-18) How artificial intelligence will change the world of work, for better and for worse. Bots and virtual assistants, IoT and analytics, and so on.
What is the IoT? Everything you need to know about the Internet of Things right now (Steve Ranger, ZDNet, 1-19-18) The Internet of Things explained: What the IoT is, and where it's going next. "Pretty much any physical object can be transformed into an IoT device if it can be connected to the internet and controlled that way. A lightbulb that can be switched on using a smartphone app is an IoT device, as is a motion sensor or a smart thermostat in your office or a connected streetlight. An IoT device could be as fluffy as a child's toy or as serious as a driverless truck, or as complicated as a jet engine that's now filled with thousands of sensors collecting and transmitting data. At an even bigger scale, smart cities projects are filling entire regions with sensors to help us understand and control the environment."
Beyond the Hype of Machine Learning (Free download, GovLoop ebook, 15-minute read) Read about machine learning's impact in the public sector, the 'how' and 'why' of artificial intelligence (AI), and how the Energy Department covers the spectrum of AI usage.

[Back to Top]


Can Artificial Intelligence Keep Your Home Secure? (Paul Sullivan, NY Times, 1-29-18) Security companies are hoping to harness the potential of A.I., promising better service at lower prices. But experts say there are risks.
What will our society look like when Artificial Intelligence is everywhere? (Stephan Talty, Smithsonian, April 2018) Will robots become self-aware? Will they have rights? Will they be in charge? Here are five scenarios from our future dominated by AI.
Amazon Is Latest Tech Giant to Face Staff Backlash Over Government Work (Jamie Condliffe, NY Times, 6-22-18) Tech firms "have built artificial intelligence and cloud computing systems that governments find attractive. But as these companies take on lucrative contracts to furnish state and federal agencies with these technologies, they’re facing increasing pushback."
