Style-Adaptive Optimization in Software Development Using MOE Structure
DOI: https://doi.org/10.5281/zenodo.13845335
ARK: https://n2t.net/ark:/40704/JIEAS.v2n5a06
References: 15
Keywords: LLM, Machine Learning, MOE, Random Forest, Ensemble Model

Abstract
The following literature review and research is devoted to models of human-AI interaction in software development. It focuses on two problems in human-AI interaction applications: AI's inability to reliably recognize a user's coding style, and its inability to complete large, complex software projects.
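To make the MOE (mixture-of-experts) structure named in the title concrete, the sketch below shows the core routing idea: a gating network produces softmax mixing weights over several experts, and the model's output is the weighted combination of the experts' outputs. This is a minimal, generic illustration using NumPy with linear experts; the names (`gate_W`, `expert_W`, `moe_forward`) are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

n_features, n_experts = 4, 3
gate_W = rng.normal(size=(n_experts, n_features))    # gating-network weights
expert_W = rng.normal(size=(n_experts, n_features))  # one linear expert per row

def moe_forward(x):
    """Weight each expert's scalar output by the gate's softmax score and combine."""
    scores = softmax(gate_W @ x)   # per-expert mixing weights, sum to 1
    outputs = expert_W @ x         # each expert's scalar prediction
    return scores @ outputs, scores

y, scores = moe_forward(rng.normal(size=n_features))
```

In a style-adaptive setting, each expert could specialize in one coding style and the gate would learn to route a given user's input toward the matching expert; sparse top-k routing (activating only the highest-scoring experts) is a common refinement.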
References
Zhao, W. X., et al. (2023). A survey of large language models. *arXiv preprint*. http://arxiv.org/abs/2303.18223
Jiang, A. Q., et al. (2023). Mistral 7B. *arXiv preprint*. http://arxiv.org/abs/2310.06825
Liu, H., Liu, L., Yue, C., Wang, Y., & Deng, B. (2023). AutotestGPT: A system for the automated generation of software test cases based on ChatGPT. *SSRN*. https://doi.org/10.2139/ssrn.4584792
Nijkamp, E., et al. (2023). CodeGen: An open large language model for code with multi-turn program synthesis. *arXiv preprint*. http://arxiv.org/abs/2203.13474
Melo, G., Alencar, P., & Cowan, D. (2019). Context-augmented software development projects: Literature review and preliminary framework. *arXiv preprint*. http://arxiv.org/abs/1910.08167
Wen, S.-F. (2023). Context-based support to enhance developers’ learning of software security. *Education Sciences, 13*(631). https://doi.org/10.3390/educsci13100631
Elyasaf, A. (2021). Context-oriented behavioral programming. *Information and Software Technology, 133*, 106504. https://doi.org/10.1016/j.infsof.2021.106504
Maekawa, A., Kobayashi, N., Funakoshi, K., & Okumura, M. (2023). Dataset distillation with attention labels for fine-tuning BERT. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)* (pp. 119–127). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-short.12
Sohail, S. S., et al. (2023). Decoding ChatGPT: A taxonomy of existing research, current challenges, and possible future directions. *Journal of King Saud University - Computer and Information Sciences, 35*, 101675. https://doi.org/10.1016/j.jksuci.2023.101675
Kirk, D., & MacDonell, S. G. (2014). Investigating a conceptual construct for software context. In *Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering* (pp. 1–10). ACM. https://doi.org/10.1145/2601248.2601263
Gu, A., & Dao, T. (n.d.). Mamba: Linear-time sequence modeling with selective state spaces.
Abuhamad, M., Abuhmed, T., Nyang, D., & Mohaisen, D. (2020). Multi-χ: Identifying multiple authors from source code files. *Proceedings on Privacy Enhancing Technologies, 2020*(1), 25–41. https://doi.org/10.2478/popets-2020-0002
Kirk, D. (2021). Software development context: Critiquing often-used terms. In *Proceedings of the 16th International Conference on Evaluation of Novel Approaches to Software Engineering* (pp. 340–347). SCITEPRESS – Science and Technology Publications. https://doi.org/10.5220/0010469903400347
D’Avila, L. F., Barbosa, J. L. V., & Oliveira, K. S. F. (2020). SW‐Context: A model to improve developers’ situational awareness. *IET Software, 14*(7), 535–543. https://doi.org/10.1049/iet-sen.2019.0345
Wu, S., et al. (n.d.). YUAN 2.0: A large language model with localized filtering-based attention.
License
Copyright (c) 2024. The author retains copyright and grants the journal the right of first publication.
This work is licensed under a Creative Commons Attribution 4.0 International License.