Researchers have created talkie-1930-13b-base, a 13B-parameter "vintage" language model trained on 260B tokens of historical pre-1931 English text, with the aim of simulating conversations with people from the past. Compared with a modern counterpart, talkie underperforms on some standard LM benchmarks but performs comparably on core language-understanding and numeracy tasks.