Artificial Intelligence thread

Coalescence

Senior Member
Registered Member
Currently even the latest and greatest LLMs (looking at you ClosedAI o1) sometimes crash and burn even on simple standalone React components. They are completely and utterly useless at writing code for my legacy backend codebase. Unless I can fit the entire codebase into the context window, no LLM can contribute without me spoonfeeding it all the relevant context... which is like 90% of the work.
I agree with this very much, especially after testing the o1-preview model and finally hitting the token limit. Before laying out the problems I still have with the newest model: the codebase I'm working with is written in Vue.js 2 and uses the Element UI and iView component libraries. The problems I have with the model are:
1. It keeps using syntax and methods that only exist in Vue.js 3, even when I've already specified, and reminded it, that the code is in Vue.js 2. I had to explicitly tell it which methods and functions to use before the output would work.
2. After modifying the code myself, I would paste the updated version back in and ask it to build on that. Sometimes it reverts my changes; other times it just iterates on the old code instead.
3. As I mentioned before, it has a tendency to change parts of the code that don't relate to the request. This is still a problem in o1-preview, and it's very annoying having to figure out what went wrong and copy-paste the working portions of the old code back in.
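To make point 1 concrete, here's a minimal sketch of the kind of mismatch I keep seeing (a hypothetical counter component, not from my actual codebase):

```javascript
// Vue 2 (Options API) style -- what the legacy codebase uses.
// State lives in data(), behavior in methods, bound via `this`.
const CounterV2 = {
  data() {
    return { count: 0 };
  },
  methods: {
    increment() {
      this.count += 1;
    },
  },
};

// Vue 3 (Composition API) style -- what the model keeps emitting
// instead, even after being told the project is on Vue 2.
// `setup()` and `ref` don't exist in stock Vue 2:
//
//   import { ref } from 'vue';
//   const CounterV3 = {
//     setup() {
//       const count = ref(0);
//       const increment = () => { count.value += 1; };
//       return { count, increment };
//     },
//   };
```

So every generation needs a pass to translate `setup()`/`ref` patterns back into `data()`/`methods` before it will even run.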

The newest model is definitely smarter than before, but since it has the same problems listed above, now with worse speed, I'd rather stick with the old model and iterate on the solutions manually. Also, have you guys noticed you can't provide file attachments to o1-preview and o1-mini?