HBM149: The Daily Blast [Neutrinowatch]

Neutrinowatch logo. Designed by Jeff Emtman

 

Please note: This is a dynamically generated podcast episode. It changes every day.

This is a short episode from the new show Neutrinowatch: A Daily Generative Podcast.  Each episode of Neutrinowatch changes a lil’ bit every day.  

This episode, The Daily Blast, features two computerized voices (Wendy and Ivan), who share the day’s news. 

To get new versions of this episode, you’ll need to either stream the audio in your podcast app/web browser, or just delete and re-download the episode.  It’s updated every 24 hours.  Note: Due to Spotify’s policy of downloading and rehosting podcast audio, this episode won’t work very well on Spotify.  Most other podcast apps should handle it well though. 

Neutrinowatch is a project of Jeff Emtman (Here Be Monsters’ host) and Martin Zaltz Austwick (Answer Me This, Song By Song, Pale Bird and others).

If you’d like to know more about generative podcasting and the story of Neutrinowatch, listen to So What Exactly is Episode 149? and read Jeff’s blog post, The Start of Generative Podcasting?

Neutrinowatch is available on most podcast apps, and as of publish date, there are 6.5 episodes available.  Each updates daily.

Producers: Jeff Emtman and Martin Zaltz Austwick

Music: The Black Spot

 

So What Exactly is Episode 149?

Image by Jeff Emtman.

 

Episode 149 is an odd duck for sure.  It changes every day due to some magical coding trickery that is happening behind the scenes. 

That episode is part of a bigger project, a new podcast that’s potentially the first of its kind.  It’s called Neutrinowatch, and every day, each episode is regenerated with new content.

This is a conversation between Jeff Emtman (Here Be Monsters’ host) and Martin Zaltz Austwick (Answer Me This, Song By Song, Pale Bird and others) about the hows and whys of Neutrinowatch: A Daily Generative Podcast (available now on most podcast apps 😉).

 

HBM148: Early Attempts at Summoning Dream Beings

Image by Jeff Emtman.

 

As a teenager, HBM host Jeff Emtman fell asleep most nights listening to Coast To Coast AM, a long-running talk show about the world’s weirdnesses.  One guest stuck out though: one who spoke about his experiences with lucid dreaming.  He’d learned how to conjure supernatural entities and converse with his subconscious.

Lucid dreams are dreams where the dreamer knows they’re asleep.  Some sleepers become lucid completely at random, but lucid dream training can drastically increase the frequency of their occurrence.

Months ago, Jeff put out a call for dream prompts on social media.  He asked if anyone had questions for an all-knowing being to be conjured in a forthcoming lucid dream.  Some of the questions are heard in this episode.  

While training for this episode, Jeff used two approaches to trigger lucid dreams.  The first was an audio recorder by the bedside; each morning, Jeff recorded his dreams (lucid or not).  The second was a series of “wakefulness checks” throughout each day: stopping at random times to test reality and determine whether he was awake or asleep.  This tactic is useful because it may eventually trigger the same behavior in a dream.

In this episode, Jeff attempts to lucid dream to answer listener questions, but finds the progress slower than he hoped.  

Here Be Monsters is an independent podcast that is funded entirely by individual sponsors and donors.  You can become a donor at patreon.com/HBMpodcast.

Producer: Jeff Emtman
Music: The Black Spot, Phantom Fauna, and Serocell.

 
 

Sleep With Me is a podcast that helps you fall asleep.

Host Drew Ackerman tells tangential stories, reads old catalogues, recaps old Charlie Brown specials, and does other calming things, all in pursuit of slowing your mind down and letting you drift off to sleep more peacefully.

Subscribe to Sleep With Me on any podcast app.

Jeff wearing his favorite Sleep With Me shirt. This shirt elicits compliments whenever it’s worn 🐏💖

HBM147: Chasing Tardigrades

Image by Jeff Emtman. Kaleidoscope collage of moss microscopy photos.

 

With much of the world shut down over the last year, HBM host Jeff Emtman started wondering if there were smaller venues where the world still felt open. 

In this episode, Jeff interviews Chloé Savard of the Instagram microscopy page @tardibabe about the joy of looking at small things, and whether it’s possible to find beauty in things you don’t understand.  

Chloé also gives Jeff instructions for finding tardigrades by soaking moss in water and squeezing out the resulting juice onto slides.

Producer: Jeff Emtman
Music: The Black Spot

 

Jeff’s Microscopy Pics

Student microscope and smartphone.

 
 

Sponsor: Pod People

Pod People is an audio production and staffing agency with a community of 1,000+ producers, editors, engineers, sound designers and more.  Pod People is free to join. After a short onboarding process, Pod People will send you clients and work opportunities that are a good match for your specific skills and interests.

Theodora is @hypo_inspo

Image by Jeff Emtman

 

A brief follow-up to last episode: you can now follow our AI-powered friend Theodora on Twitter! She tweets several times a day, giving bad advice, good advice, and some strange poetry. Her account’s called Hypothetical Inspiration. Give her a follow.

 

HBM146: Theodora

Computer generated text projected on computer generated waves. Image by Jeff Emtman.

 

How does a computer learn to speak with emotion and conviction? 

Language is hard to express as a set of firm rules.  Every language rule seems to have exceptions, and the exceptions have exceptions, et cetera.  Typical “if this, then that” approaches to language just don’t work.  There’s too much nuance.

But each generation of algorithms gets closer and closer. Markov chains were invented in the 1800s and rely on nothing more than basic probabilities.  It’s a simple idea: look at an input (like a book) and learn the order in which words tend to appear.  With this knowledge, it’s possible to generate new text in the style of the input, just by looking up the probability of words that are likely to follow each other.  It’s simple and sometimes half decent, but not effective for longer outputs, as this approach tends to lack object permanence and generate run-on sentences. Markov models are used today in predictive text phone keyboards, but can also be used to predict weather, stock prices, etc.
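As a rough illustration of that idea (and not the code used in the episode), here’s a minimal word-level Markov chain in Python; the tiny corpus string is just a placeholder.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record which word tends to follow which, as described above."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=20):
    """Walk the chain, picking each next word by its observed frequency."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)  # duplicates make common follow-ups more likely
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```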

There’ve been plenty of other approaches to language generation (and plenty of mishaps as well).  A notable example is Cleverbot, which chats with humans and heavily references its previous conversations to generate its results.  Cleverbot’s chatting can sometimes be eerily human, perfectly regurgitating slang, internet abbreviations, and obscure jokes.  But it’s kind of a sly trick at the end of the day, and, as with Markov chains, Cleverbot’s AI still doesn’t always grasp grammar and object permanence.

In the last decade or two, there’s been an explosion in the abilities of a different kind of AI: the artificial neural network.  These “neural nets” are modeled on the way brains work, running stimuli through their “neurons” and reinforcing the paths that yield the best results.

The outputs are chaotic until the model is properly “trained.” But as training reaches its optimal point, a model emerges that can efficiently process incoming data and produce output with the same kinds of nuance, strangeness, and imperfection you expect to see in the natural world.  Like Markov chains, neural nets have a lot of applications outside language too.
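To make “reinforcing the paths that yield the best results” a little more concrete, here’s a toy training loop (again, illustrative only and not anything from the episode), assuming nothing beyond numpy: a tiny two-layer network slowly learning the XOR function by nudging its weights after every pass.

```python
import numpy as np

# Toy network: 2 inputs -> 8 hidden "neurons" -> 1 output, learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: run the "stimuli" through the network.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge each weight in the direction that reduces the
    # error, i.e. reinforce the connections that gave better answers.
    delta_out = (out - y) * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ delta_out)
    b2 -= 0.5 * delta_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ delta_hid)
    b1 -= 0.5 * delta_hid.sum(axis=0)

print(out.round(2))  # typically approaches [0, 1, 1, 0] as training converges
```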

But these neural networks are complicated, like a brain.  So complicated, in fact, that few try to dissect these trained models to see how they’re actually working.  Tracing their decisions backwards is difficult, but not impossible.

If we temporarily ignore the real risk that sophisticated AI language models pose for societies attempting to separate truth from fiction, these neural net models allow for some interesting possibilities: namely, extracting the language style of a large body of text and using that extracted style to generate new text written in the voice of the original.

In this episode, Jeff creates an AI and names it “Theodora.”  She’s trained to speak like a presenter giving a TED Talk.  The results vary from believable to utterly absurd, and cause Jeff to reflect on the continued inability of individuals, AI, and large nonprofits to distinguish between good ideas and absolute madness.

 

Three bits of raw output from Theodora. These text files were sent to Google Cloud’s TTS service for voicing.

 

On the creation of Theodora:  Jeff used a variety of free tools to generate Theodora in the episode.  OpenAI’s Generative Pre-trained Transformer 2 (GPT-2) was turned into the Python library GPT-2 Simple by Max Woolf, who also created a tutorial demonstrating how to train the model for free using Google Colab.  Jeff used this tutorial to train Theodora on a corpus of about 900 TED Talk transcripts for 5,000 training steps. Jeff then downloaded the model locally and used JupyterLab (Python) to generate new text.  That text was then sent to Google Cloud’s Text-To-Speech (TTS) service, where it was converted to the voice heard in the episode.
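For the curious, a compressed sketch of that pipeline might look something like the following, assuming Max Woolf’s gpt-2-simple library and the Google Cloud Text-to-Speech Python client.  The corpus file name, model size, voice settings, and output path are illustrative placeholders, not the exact values used for the episode.

```python
import gpt_2_simple as gpt2
from google.cloud import texttospeech

# Fine-tune GPT-2 on a plain-text corpus of talk transcripts.
# "ted_talks.txt", the 124M model size, and the run name are placeholders.
gpt2.download_gpt2(model_name="124M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="ted_talks.txt", model_name="124M",
              steps=5000, run_name="theodora")

# Generate new text in the style of the training corpus.
texts = gpt2.generate(sess, run_name="theodora", length=400,
                      temperature=0.8, return_as_list=True)

# Send the generated text to Google Cloud Text-to-Speech for voicing.
client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text=texts[0]),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.FEMALE),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3))

with open("theodora.mp3", "wb") as f:
    f.write(response.audio_content)
```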

Producer: Jeff Emtman
Music: Liance

 
 

James Li, a.k.a. “Liance.” Photo by Alex Kozobolis

This Painting Doesn’t Dry album art.

Sponsor: Liance

Independent musician James Li has just released This Painting Doesn’t Dry, an album about the relationship between personal experiences and the story of humanity as a whole.

James made this album while he anxiously watched his homeland of Hong Kong fall into political crisis.