Traditional Automatic Essay Scoring (AES) systems, which initially relied on basic text features and statistical models, have evolved with the integration of natural language processing (NLP) and machine learning techniques. The emergence of Large Language Models (LLMs) represents a significant leap in this field. Built on deep neural networks and trained on vast datasets, LLMs can generate human-like text and provide nuanced evaluations of written content, offering great potential to revolutionize assessment practices in language education. This review synthesizes findings from recent empirical studies assessing the accuracy, reliability, and practical applications of LLMs in AES. Results indicate that LLMs can align closely with human raters, providing detailed and consistent feedback across various writing dimensions. However, challenges remain, including potential biases, ethical concerns, and the need for transparency in model decision-making. The review underlines the importance of addressing these challenges through continued research and the development of robust frameworks that combine advanced technologies with human oversight. With such safeguards in place, LLMs can enhance the efficiency and objectivity of essay scoring, supporting educators and learners in achieving better educational outcomes. The review concludes that while LLMs hold promise for transforming AES practices, careful integration and ethical consideration are crucial to harnessing their full potential in educational settings.