
Writers and Editors

Who to use for audiobook distribution

I can produce an audiobook myself, but will any of the audiobook companies agree to distribute it, or must it be produced "in-house"?

 

Guest post by Maggie Lynch

 

If your audiobook is being produced by an audio publisher, they have their own distribution sources.

 

If you are doing the audiobook yourself or hiring a narrator, and you will have the finished audio files to distribute, I would recommend Findaway Voices. They have the largest audiobook distribution network. The parent company, Findaway, was acquired by Spotify in 2022, so their distribution is even stronger, and they are trying new marketing options to capitalize on that relationship.

 

Another good audio distribution service is Author's Republic. They have a great reputation for clear communication and monthly royalty payments and they distribute to about 50 places. Every author I know who uses them loves them.

 

PublishDrive, Lantern Audio (formerly Listen Up), and BookBaby also have good reputations; they distribute books to the big retailers and then use Findaway to reach the rest of the places. My thought is, why not just go with Findaway to begin with?

 

You can load directly to the big places like Amazon (Audible), Kobo, Barnes & Noble, and Apple, but I personally find that too time-consuming, and if you want to make changes in metadata—cover, pricing, description, etc.—it's a lot of places to go to. I'd rather have a book in one place with wide distribution so I can make changes at any time and know it will populate to all the distributors they cover.


You can learn more about Maggie Lynch here.


What are the differences between IngramSpark and Lightning Source for self-publishing authors?

Answer from David Kudler, publisher at Stillpoint Digital Press:

Lightning Source (LSI) and IngramSpark (IS) are both portals offered by Ingram Content Group (the world's largest book distributor) to access their print-on-demand (POD) and distribution services. They are very similar, but have a few minor differences.
    

LSI is older — it was a standalone company founded in 1996 and acquired by Ingram about fifteen years ago. It was intended to offer publishers of all sizes POD services — especially useful for low-selling backlist titles, but also for popular titles that outsold their print runs.      

 

Ingram started IS in 2013 specifically as a service to the burgeoning micro- and self-publishing community — allowing them access to the same POD and worldwide distribution as major publishers, and allowing them to release their books in print without the high-risk investment of printing thousands of copies of their books (and then having to store them and ship them).

 

They both use the same worldwide network of printing plants. They both offer the same access to Ingram's distribution network. The print quality is the same.

There are differences between the two portals, reflecting their different origins.
    LSI allows the publisher to offer discounts from the industry-standard 55% (which ends up being a 40% discount to retailers) down to 20% (an 8% retail discount). IS has a set 55% discount. (Some publishers like to be able to lower the discount in order to drop the price for certain books. It generally means you won't get the book into any brick-and-mortar stores or libraries, but they may not be your target anyway. A quick worked example of this arithmetic follows the list.)
    IS offers ebook distribution (they take a higher-than-standard 20% cut); LSI is print-only.
    LSI charges an annual $12 "catalog" fee for each edition, and has slightly higher fees for revising your books (I avoid the setup fees as a member of IBPA).
    LSI offers printing discounts on titles that sell well (BoLT).
    As of this month, ScribeCount (the online sales tracking service) imports sales from IS, but you have to hand-enter them for LSI.

Other than that, they're essentially the same. I use LSI, but only because I started before IS existed. I have played with "short" discounts, but mostly stick to the standard 55%. The ScribeCount situation has me seriously considering switching, but… if it ain't broke, don't fix it.
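For readers who want to see the discount arithmetic in the first point worked out, here is a minimal sketch in Python. The $20.00 list price and $5.50 print cost are made-up numbers for illustration, and the formula (list price less the wholesale discount, less the per-copy print cost) is the generic print-on-demand calculation, not Ingram's exact fee schedule.

    # Rough sketch of POD wholesale-discount arithmetic (hypothetical numbers).
    # Per the post above, a 55% wholesale discount leaves retailers roughly a
    # 40% discount off list price, and a 20% wholesale discount leaves roughly 8%.
    def publisher_compensation(list_price, wholesale_discount, print_cost):
        """What the publisher nets per copy sold through the distributor."""
        return list_price * (1 - wholesale_discount) - print_cost

    LIST_PRICE = 20.00   # hypothetical list price
    PRINT_COST = 5.50    # hypothetical per-copy print cost

    for discount in (0.55, 0.20):
        net = publisher_compensation(LIST_PRICE, discount, PRINT_COST)
        print(f"{discount:.0%} wholesale discount -> publisher nets ${net:.2f} per copy")

With these assumed numbers, the standard 55% discount nets the publisher $3.50 per copy, while a 20% "short" discount nets $10.50, which is why some publishers are tempted to lower the discount despite losing bookstore and library orders.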

 

     For what it's worth, I strongly recommend that my clients use IS rather than LSI to distribute to the book trade — it's easier to set up, has slightly lower costs, and most of the high-end features of LSI aren't going to come into play. Then we use Amazon's KDP Print to distribute to Jeff Bezos's domain.

   
    Ebooks, of course, are another issue altogether.

 

Thanks to David Kudler for permission to reprint this useful post.


Artificial intelligence (AI): What problems does it bring? What problems does it solve? And what the heck is a bot?

This replaces an earlier version of this post that appeared in June 2018.

Updated 10-30-23.

 

The Basics about AI
What is AI? Everything you need to know about Artificial Intelligence (Nick Heath, ZDNet, 2-2-18) An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is an Internet bot? (Wikipedia) An Internet bot, web robot, robot or simply bot, is a software application that runs automated tasks over the Internet, usually with the intent to imitate human activity on the Internet, such as messaging, on a large scale.
What is a bot: types and functions (Digital Guide IONOS UK, 11-16-21) What is a bot, what functions can it perform, and what does its structure consist of? Learn about rule-based bots and self-learning bots, the different types of good bots, the different types of malware bots, and how they work. What types of attacks can botnets perform? (A minimal sketch of a rule-based bot follows this list.)
ChatGPT (AI) This chatbot, launched by OpenAI in November 2022, is being used to write novels, among other things. It has a problem with factual accuracy. See also the section on this website about ChatGPT (AI).
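To make the rule-based versus self-learning distinction above concrete, here is a minimal sketch of a rule-based bot in Python. The URL and keyword are hypothetical; the point is only that the bot runs an automated task over the Internet against a fixed rule, where a self-learning bot would swap the hard-coded rule for a trained model.

    # Minimal rule-based bot sketch (hypothetical URL and keyword).
    # It fetches a page on a schedule and applies one fixed rule;
    # a self-learning bot would replace the rule with a trained model.
    import time
    import urllib.request

    URL = "https://example.com/status"   # hypothetical page to watch
    KEYWORD = "maintenance"              # hypothetical trigger word

    def check_once():
        with urllib.request.urlopen(URL, timeout=10) as response:
            page = response.read().decode("utf-8", errors="replace")
        if KEYWORD in page.lower():      # the fixed rule
            print("Rule matched: take the scripted action (e.g., post an alert).")
        else:
            print("Nothing to do this time.")

    if __name__ == "__main__":
        for _ in range(3):               # short demo; real bots loop indefinitely
            check_once()
            time.sleep(60)               # wait a minute between checks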


 

Writers/journalists/creators and artificial intelligence and issues like copyright protection, flaws and inaccuracies, and how frank creators must be about using AI
FAQs on the Authors Guild’s Positions and Advocacy Around Generative AI
A crash course for journalists on AI and machine learning (Video, 51 min., International Journalism Festival, 4-7-22)

The AI is eating itself (Casey Newton, Platformer, 6-27-23) Boy, is this post packed with info and insights. The third paragraph alone kept me online for an extra half-hour, following links to more good reading.
The AI takeover of Google Search starts now (David Pierce, The Verge, 5-10-23) Google is moving slowly and carefully to make AI happen. Maybe too slowly and too carefully for some people. But if you opt in, a whole new search experience awaits.
AI is killing the old web, and the new web struggles to be born (James Vincent, The Verge, 6-26-23) Generative AI models are changing the economy of the web, making it cheaper to generate lower-quality content. We’re just beginning to see the effects of these changes.
New Tool Could Poison DALL-E and Other AI to Help Artists (Josh Hendrickson, PC Mag, 10-27-23) Researchers from the University of Chicago introduce a new tool, dubbed Nightshade, that can 'poison' AI and ruin its data set, leading it to generate inaccurate results.
---This new data poisoning tool lets artists fight back against generative AI (Melissa Heikkilä, MIT Technology Review, 10-23-23) The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.
Godfathers of AI Have a New Warning: Get a Handle on the Tech Before It's Too Late (Joe Hindy, PC Mag, 10-24-23) Two dozen experts warn that 'AI systems could rapidly come to outperform humans in an increasing number of tasks [and] pose a range of societal-scale risks.'
How AP Investigated the Global Impacts of AI (Garance Burke, Pulitzer Center, 6-21-23) "When my editor Ron Nixon and I realized that too few journalists had gotten trained on how these complex statistical models work, we devised internal workshops to build capacity in AI accountability reporting....No surprise, FOIA and its equivalents are an imperfect tool and rarely yield raw code. Little transparency about the use of AI tools by government agencies can mean public knowledge is severely restricted, even if records are disclosed. Viewing predictive and surveillance tools in isolation doesn’t capture their full global influence. The purchase and implementation of such technologies isn’t necessarily centralized. Individual state and local agencies may use a surveillance or predictive tool on a free trial basis and never sign a contract. And even if federal agencies license a tool intending to implement it nationwide, that isn’t always rolled out the same way in each jurisdiction."
AI is being used to generate whole spam sites (James Vincent, The Verge, 5-2-23) A report identified 49 sites that use AI tools like ChatGPT to generate cheap and unreliable content. Experts warn the low costs of producing such text incentivizes the creation of these sites.
The semiautomated social network is coming (James Vincent, The Verge, 3-10-23) LinkedIn announced last week it’s using AI to help write posts for users to chat about. Snap has created its own chatbot, and Meta is working on AI ‘personas.’ It seems future social networks will be increasingly augmented by AI.


 

 

AI Art for Authors: Which Program to Use (Jason Hamilton, Kindlepreneur, 12-9-22) There are dozens of AI art tools out there, many with unique specialties. But most would agree that three stand up above the rest:
    Midjourney
    Dall-E 2
    Stable Diffusion.

Hamilton discusses how to access them, what they cost, how they can be useful, and why he recommends them (or not, and what for, illustrated), with a final section on AI art's copyright problems: Are they copying existing art on the collage principle (a little here, a little there), and are they facing legal and copyright problems?
Artificial Labor (Ed Zitron's Where's Your Ed At, 5-12-23) With the 2023 Writers Guild of America strike, "we are entering a historical battle between actual labor – those who create value in organizations and the world itself – and the petty executive titans that believe that there are no true value creators in society, only “ideas people” and those interchangeable units who carry out their whims...The television and film industries are controlled by exceedingly rich executives that view entertainment as something that can (and should) be commoditized and traded, rather than fostered and created by human beings. While dialogue eventually has to be performed by a human being, the Alliance of Motion Picture and Television Producers clearly views writing (and writers) as more of a fuel that can be used to create products rather than something unique or special....entertainment’s elites very clearly want to be able to use artificial intelligence to write content."



The Fanfic Sex Trope That Caught a Plundering AI Red-Handed (Rose Eveleth, Wired, 5-15-23) Sudowrite, a tool that uses OpenAI’s GPT-3, was found to have understood a sexual act known only to a specific online community of Omegaverse writers. The data set that was used to train most (all?) text-generative AI includes sex acts found only in the raunchiest of fanfiction. "What if your work exists in a kind of in-between space—not work that you make a living doing, but still something you spent hours crafting, in a community that you care deeply about? And what if, within that community, there was a specific sex trope that would inadvertently unmask how models like ChatGPT scrape the web—and how that scraping impacts the writers who created it?" (H/T Nate Hoffelder, Morning Coffee)
AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit (James Vincent, The Verge, 1-16-23) The suit claims generative AI art tools violate copyright law by scraping artists’ work from the web without their consent. Butterick and Saveri are currently suing Microsoft, GitHub, and OpenAI in a similar case involving the AI programming model CoPilot, which is trained on lines of code collected from the web.
The lawsuit that could rewrite the rules of AI copyright (James Vincent, The Verge, 11-8-22) Microsoft, its subsidiary GitHub, and its business partner OpenAI have been targeted in a proposed class action lawsuit alleging that the companies’ creation of AI-powered coding assistant GitHub Copilot relies on “software piracy on an unprecedented scale.”

---"Someone comes along and says, 'Let's socialize the costs and privatize the profits.'"

---“This is the first class-action case in the US challenging the training and output of AI systems. It will not be the last.”
The scary truth about AI copyright is nobody knows what will happen next (James Vincent, The Verge, 11-15-22) The last year has seen a boom in AI models that create art, music, and code by learning from others’ work. But as these tools become more prominent, unanswered legal questions could shape the future of the field.
Wendy’s to test AI chatbot that takes your drive-thru order (St. Louis Post-Dispatch) (Erum Salam, The Guardian, 5-10-23) 'The Guardian' reports that Wendy's is ready to roll out an artificial-intelligence-powered chatbot capable of taking customers' orders. The pilot program ‘seeks to take the complexity [the humans] out of the ordering process.’
In a Reminder of AI's Limits, ChatGPT Fails Gastro Exam (Michael DePeau-Wilson, MedPage Today, 5-22-23) Both versions of the AI model failed to achieve the 70% accuracy threshold to pass.
Some companies are already replacing workers with ChatGPT, despite warnings it shouldn’t be relied on for ‘anything important’ (Trey Williams, Fortune, 2-25-23)
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (NY Times, 5-1-23) For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
Teaching A.I. Systems to Behave Themselves (Cade Metz, NY Times, 8-13-17)


 

On the plus or minus side:
Smarter health: How AI is transforming health care (Dorey Scheimer, Meghna Chakrabarti, and Tim Skoog, On Point, first piece in a Smarter Health series, WBUR radio, 5-27-22, with transcript) Guests Dr. Ziad Obermeyer (associate professor of health policy and management at the University of California, Berkeley School of Public Health. Emergency medicine physician) and Richard Sharp (director of the biomedical ethics research program at the Mayo Clinic, @MayoClinic) explore the potential of AI in health care — from predicting patient risk, to diagnostics, to just helping physicians make better decisions.
Artificial Intelligence Is Primed to Disrupt Health Care Industry (Ben Hernandez, ETF Trends, 7-12-15) Artificial intelligence (AI) is one of the prime technologies leading the wave of disruption that is going on within the health care sector. Recent studies have shown that AI technology can outperform doctors when it comes to cancer screenings and disease diagnoses. In particular, this could mean specialists such as radiologists and pathologists could be replaced by AI technology. Whether society is ready for it or not, robotics, artificial intelligence (AI), machine learning, or any other type of disruptive technology will be the next wave of innovation.
How will large language models (LLMs) change the world? (Dynomight Internet Newsletter, The Browser, 12-8-22) Think about historical analogies for 'large language models': the ice trade and freezers; chess humans and chess AIs; farmers and tractors; horses and railroads; swords and guns; swordfighting and fencing; artisanal goods and mass production; site-built homes and pre-manufactured homes; painting and photography; feet and Segways; gull-wing and scissor doors; sex and pornography; human calculators and electronic calculators.



Artificial You: AI and the Future of Your Mind by Susan Schneider. Can robots really be conscious? Is the mind just a program? "Schneider offers sophisticated insights on what is perhaps the number one long-term challenge confronting humanity."―Martin Rees
Top 9 ethical issues in artificial intelligence (Julia Bossmann, World Economic Forum, 10-21-16) In brief: unemployment, income inequality, humanity, artificial stupidity (mistakes), racist robots (AI bias), security (safety from adversaries), evil genies (unintended consequences), singularity, robot rights. She makes interesting points!
AI in the workplace: Everything you need to know (Nick Heath, ZDNet, 6-29-18) How artificial intelligence will change the world of work, for better and for worse. Bots and virtual assistants, IoT and analytics, and so on.
What is the IoT? Everything you need to know about the Internet of Things right now (Steve Ranger, ZDNet, 1-19-18) The Internet of Things explained: What the IoT is, and where it's going next. "Pretty much any physical object can be transformed into an IoT device if it can be connected to the internet and controlled that way. A lightbulb that can be switched on using a smartphone app is an IoT device, as is a motion sensor or a smart thermostat in your office or a connected streetlight. An IoT device could be as fluffy as a child's toy or as serious as a driverless truck, or as complicated as a jet engine that's now filled with thousands of sensors collecting and transmitting data. At an even bigger scale, smart cities projects are filling entire regions with sensors to help us understand and control the environment."
Beyond the Hype of Machine Learning (Free download, GovLoop ebook, 15-minute read) Read about machine learning's impact in the public sector, the 'how' and 'why' of artificial intelligence (AI), and how the Energy Department covers the spectrum of AI usage.



Can Artificial Intelligence Keep Your Home Secure? (Paul Sullivan, NY Times, 1-29-18) Security companies are hoping to harness the potential of A.I., promising better service at lower prices. But experts say there are risks.
What will our society look like when Artificial Intelligence is everywhere? (Stephan Talty, Smithsonian, April 2018) Will robots become self-aware? Will they have rights? Will they be in charge? Here are five scenarios from our future dominated by AI.
Amazon Is Latest Tech Giant to Face Staff Backlash Over Government Work (Jamie Condliffe, NY Times, 6-22-18) Tech "firms have built artificial intelligence and cloud computing systems that governments find attractive. But as these companies take on lucrative contracts to furnish state and federal agencies with these technologies, they’re facing increasing pushback."
