Wordcraft Writers Workshop

“Because the language model underpinning Wordcraft is trained on a large amount of internet data, standard archetypes and tropes are likely more heavily represented and therefore much more likely to be generated. Allison Parrish described this as AI being inherently conservative. Because the training data is captured at a particular moment in time, and trained on language scraped from the internet, these models have a static representation of the world and no innate capacity to progress past the data’s biases, blind spots, and shortcomings.”
