References
Biancotti, C. and Camassa, C. (2023), Loquacity and Visible Emotion: ChatGPT as a Policy Advisor, Bank of Italy Occasional Paper 814.
Cowen, T. and Tabarrok, A. (2023), How to Learn and Teach Economics with Large Language Models, Including GPT, George Mason University Working Paper in Economics 23-18.
Eisfeldt, A. L., Schubert, G. and Zhang, M. B. (2023), Generative AI and Firm Values, NBER Working Paper 31222.
Hansen, A. and Kazinnik, S. (2023), Can ChatGPT Decipher Fedspeak?, mimeo, Federal Reserve Bank of Richmond.
Kandpal, N., Deng, H., Roberts, A., Wallace, E. and Raffel, C. (2022), Large Language Models Struggle to Learn Long-Tail Knowledge, arXiv preprint arXiv:2211.08411.
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y. and Iwasawa, Y. (2022), Large language models are zero-shot reasoners, arXiv preprint arXiv:2205.11916.
Korinek, A. (2023), Language models and cognitive automation for economic research, CEPR Discussion Paper 17923.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., … and Lowe, R. (2022), Training language models to follow instructions with human feedback, Advances in Neural Information Processing Systems, 35, 27730–27744.
Noy, S. and Zhang, W. (2023), Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence, Science, 381(6654), 187–192.
Perez, E., Ringer, S., Lukošiūtė, K., Nguyen, K., Chen, E., Heiner, S., … and Kaplan, J. (2023), Discovering Language Model Behaviors with Model-Written Evaluations, Findings of the Association for Computational Linguistics: ACL 2023, 13387–13434.
Taliaferro, D. (2023), Constructing novel datasets with ChatGPT: Opportunities and limitations, VoxEU, June 15.