References

[1] Richens, J., & Everitt, T. (2024). Robust agents learn causal world models. International Conference on Learning Representations (ICLR).

[2] Schölkopf, B., Locatello, F., Bauer, S., Ke, N. R., Kalchbrenner, N., Goyal, A., & Bengio, Y. (2021). Toward causal representation learning. Proceedings of the IEEE, 109(5), 612-634.

[3] Pearl, J. (2009). Causality: Models, Reasoning and Inference (2nd ed.). Cambridge University Press.

[4] Harel, M. (2019). LMU CMSI 498: Your window into the cromulent world of cognitive systems.

[5] Bhattamishra, S., Patel, A., Blunsom, P., & Kanade, V. (2024). Understanding in-context learning in transformers and LLMs by learning to learn discrete functions. International Conference on Learning Representations (ICLR).

[6] OpenAI. (2024). Prompt engineering. Retrieved June 4, 2024, from https://platform.openai.com/docs/guides/prompt-engineering

[7] Wen, Y., & Chaudhuri, S. (2024). Batched low-rank adaptation of foundation models. International Conference on Learning Representations (ICLR).

[8] Mirzadeh, I., Alizadeh, K., Mehta, S., Del Mundo, C. C., Tuzel, O., Samei, G., ... & Farajtabar, M. (2024). ReLU strikes back: Exploiting activation sparsity in large language models. International Conference on Learning Representations (ICLR).

[9] Cho, D., Yang, J., Seo, J., Bae, S., Kang, D., Park, S., ... & Lim, W. (2024). ShERPA: Leveraging neuron alignment for knowledge-preserving fine-tuning. ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo).

[10] Li, X., Yu, P., Zhou, C., Schick, T., Zettlemoyer, L., Levy, O., ... & Lewis, M. (2024). Self-alignment with instruction backtranslation. International Conference on Learning Representations (ICLR).

[11] Qi, X., Zeng, Y., Xie, T., Chen, P. Y., Jia, R., Mittal, P., & Henderson, P. (2024). Fine-tuning aligned language models compromises safety, even when users do not intend to! International Conference on Learning Representations (ICLR).