Erm... a follow-up post by someone...
For those who have watched the video and did not notice it because they don't yet have the required experience in DCS Lua scripting (i.e. exactly the people who would try this): the code the chat bot produces (it's a chat bot, not a coding bot, so what it generates is meant to be small talk) doesn't work. It contains multiple errors because the snippets are copied willy-nilly from different sources that use different frameworks. If you know how to code for DCS, this is obvious. If you don't, the answer seems as convincing as the (exceedingly nice!) bit about the Ju-88 being a prime four-engine bomber. Chat bots are 'yes' bots, and their answers are accordingly airheaded.
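To illustrate (this is purely hypothetical, not the script from the video, and the group name is made up): a typical chat-bot patchwork mixes the stock scripting engine with MOOSE and MIST calls, which only work if those frameworks were actually loaded into the mission first.

```lua
-- Purely illustrative sketch of a mixed-framework patchwork, NOT the video's script.
-- In a mission that only has the stock scripting engine, the MOOSE and MIST
-- lines below die with "attempt to index a nil value".

-- Stock scripting engine: works out of the box.
trigger.action.outText("Mission script loaded", 10)

-- MOOSE class: only exists if Moose.lua was loaded beforehand.
MESSAGE:New("Enemy group inbound", 15):ToAll()

-- MIST helper: only exists if mist.lua was loaded beforehand.
mist.respawnGroup("Red Attack Group 1", true)
```

Each line is plausible on its own, which is exactly why the result looks convincing to someone who can't read it.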
The narrator of this video appears to have a lamentably shallow understanding of both DCS scripting and ChatGPT: he can't spot obvious mistakes, and asking the AI (the 'yes'-bot) whether it understands you isn't proof of the AI's reflective ability (it isn't reflective at all); it's a test of the bot's ability to say yes. It does not understand what you want. It has learned to grab code pieces tagged 'DCS', 'Lua' and 'mission' and string them together in a manner similar to other code pieces, much like telling an art bot to draw an image in the style of Picasso.
Worse, the video suggests that the code would work, but never tests the code the bot creates. Let's be generous and say that the narrator skipped this part because he ran out of time, and not because it would have severely diminished the video's appeal.
So both generated scripts contain bad, obvious errors, for what are indeed trivial tasks. Of course, @Hog_driver knows this, and I believe "I dunno, judge yourself" was exceedingly tongue-in-cheek. The problem here, to me, is that the latter part goes over the heads of exactly those to whom it is relevant.
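For a sense of how trivial: we don't know exactly what the video prompted for, but this kind of task is a few lines in the stock scripting engine alone (the group name below is invented for the example).

```lua
-- Minimal sketch with stock DCS scripting only, no frameworks needed.
-- "Aggressors-1" is a made-up late-activated group name.
local grp = Group.getByName("Aggressors-1")
if grp then
  trigger.action.activateGroup(grp)
  trigger.action.outText("Aggressors-1 is now active.", 10)
end
```

That is the scale of thing the bot still managed to get wrong.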
Apologies if I'm (again) belaboring the obvious.