Abstract
In this experiment, I explored whether large language models can free-associate, daydream-style, in a closed loop. For five open-weight chat models, I seeded the loop with a short sentence and, at each step, prompted the model to generate one or two sentences that pivot tangentially from the prior text. Each response was fed back as the next input. I embedded each turn, normalized the vectors, and tracked the cosine similarity between consecutive turns, then visualized the trajectory of the model's meandering mind using t-SNE dimensionality reduction, with DBSCAN to identify clusters. Across models, the paths ranged widely through subject matter but repeatedly settled into recurring motifs; models tended to linger near certain topic attractors like moths circling a few candle flames. The experiment lets us visualize how different models wander through their latent space when given the freedom to explore it.