Zero-shot non-interactive vibe-coding


I have been researching how to vibe-code, or more generally how to use AI to code for me. There seem to be two approaches - one for very small stuff and one for very big stuff.

Small stuff goes like this: just ask the AI to make a script etc., then run it directly in the AI chat or download it and run it locally.

The big stuff goes like this:

  • create specifications
  • install cursor / claude code / aider or anything similar
  • let the AI agent build it iteratively, occasionally hinting at how to proceed
  • (optional) during iteration, create tests and feed them into the loop too

I have lately had some success with a hybrid method, suitable for projects a bit bigger than the small stuff but where I do not need the full AI agent loop set up. And it is free. And, when it works, it can be more non-developer friendly than the big approach too.

The non-interactive (big context) method

  • create a detailed specification - this is the same as in the big method
  • open Google AI Studio, select Gemini 2.5 Pro Preview and have it code the whole thing in one go
  • download all the files and run it

You still need the spec. You can create it iteratively in any AI chat, just make sure you end up with a detailed spec, as if you were handing it to a developer who will code it from home office for the next few days without stand-ups.

For the AI generation, this should in theory work with any model. But in practice, for this to be successful and meaningful you need a model with a pretty big context window. Currently that is only Gemini 2.5, and maybe since last week also Llama 4 (which is still not available here).

Downloading all the files can mean a lot of copy-pasting. But you can ask the AI to pack it all up for you so it becomes one copy-paste action.
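To make the packing step concrete, here is a minimal hand-written sketch of what such a packed script tends to look like (the file names and contents below are made up for illustration; the real script is whatever the model generates from your chat):

#!/usr/bin/env bash
# Sketch of an AI-generated "pack everything" script.
# Each file is written with a quoted heredoc so its content is copied verbatim.
set -e

mkdir -p css js

cat > index.html << 'EOF'
<!DOCTYPE html>
<html>
  <head><link rel="stylesheet" href="css/style.css"></head>
  <body><script src="js/main.js"></script></body>
</html>
EOF

cat > css/style.css << 'EOF'
body { margin: 0; background: #111; }
EOF

cat > js/main.js << 'EOF'
console.log('hello from the generated app');
EOF

echo "All files created. Open index.html in a browser."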

If the result is not good enough, maybe the spec was not good enough and you need to iterate the whole thing again. Or you can ask the chat for small tweaks too. But if you are successful, you leapfrog the incremental approach and get to a minimal MVP in just one go.

Now examples with full prompts and how to do it in practice:

Example 1: Tetris - play the game after 2 short prompts

Let’s recreate the game of Tetris!

Creating the spec was made easy by the many existing texts about Tetris, so in this case (but not when you are creating something new and unique) this step can be delegated to Perplexity with a minimal prompt:

create a specification of game tetris. research the game on the internet, then write down a detail specification of the game engine, game controls, game ui and everything a programmer from another planet would need to create such game. do not write any code, just the specification

This results in this chat, or this spec in markdown. The spec is quite good, detailing all the shapes, game mechanics etc. It should be good enough for a programmer to follow from home office.

Now log into Google AI Studio, select Gemini 2.5 and use this system prompt:

you are a full-stack javascript developer. you never use typescript. you will be given a specification of a graphical game and you have to create that game as a static web site with a html, css and javascript. read the specification carefully, follow all parts and implement it all exactly as specified.

and the first message in chat:

create a game of tetris according to this detail specification:
~~~ content of the markdown specification above ~~~

This results in the AI spitting out the few files that implement the game. In this case it was only 3 files, so 3x copy-paste would not be so hard. But to follow the method, and to make this work when there are 20 files, follow up in the chat with:

now take all the code for all files that you generated in this chat and make one huge bash script out of them. when the script is run, it will place the file contents in the right files in the right structure as you have describe. do not change the code any more, just wrap it in one bash script full of ‘echo’ and ‘cat’ commands that create it all in one go.

The result is a bash script that can be downloaded as install.sh and executed; then index.html can be opened in a browser and voilà, there is a playable game of Tetris, done with very minimal prompting.
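Running the downloaded script is then just a couple of commands (assuming it was saved as install.sh; it is worth skimming it first, since you are about to execute model-generated shell code):

# skim the generated script before executing it
less install.sh

# run it - it should only create files in the current directory
bash install.sh

# then open the game (or just double-click index.html in a file manager)
open index.html      # macOS
xdg-open index.html  # Linux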

(screenshot: the resulting playable Tetris game)

For completeness, I also tested the same method with some other free AI tools, but was not as successful:

  • Claude 3.7 Sonnet was promising with its artifacts GUI. But in the middle of JavaScript generation I got “Claude hit the max length for a message and has paused its response. You can write Continue to keep the chat going”. I did write “Continue”, but this meant the model had to re-read its previous output; it started rewriting parts of it, and before it could finish the new version it hit the max length again. I tried a few more loops, but did not get much further.
  • DeepSeek R1 (via Cerebras) was almost there. The game is playable for a few seconds - the first block falls at a very high speed, but the second block never appears. Maybe with a few spec tweaks or some luck this could also finish the task.
  • o3-mini was almost right. It even offered to download everything as a zip instead of the bash/cat/echo hack. The resulting game was almost like Tetris and playable, but the blocks did not fall on their own (they stayed at the top until manipulated and dropped with the space bar). Playable, but without the time pressure and levels it was a bit boring.

Example 2: an app with server-side code, a DB, and a few use cases

I also used this method to make ctene.cz, a site for Czech children who are learning to read and need funny, easy texts to practice on.

The site now runs an updated version with many tweaks added, but I have tried to redo the MVP with this method, just to have another example and prove it can be done.

First I made this specification. This time not with Perplexity, but by actually writing parts myself and then using an AI chat to iterate, fill in open points, add some stuff, etc. The final spec is quite detailed and should be good enough for a home office programmer.

Then I opened a new Google AI Studio chat. The system prompt is similar to the one in the first example:

you are full stack javascript developer. you follow specifications carefully, making sure every bit is implemented as was requested. you never use typescript, only javascript.

followed by:

create a system according to this specification:
~~~ the content of spec above ~~~

and after 297 seconds I got all the individual files and a bunch of instructions on where to put them and how to install everything. But I chose to pack it all up with:

Now take all the code for all files that you generated in this chat and make one huge bash script out of them. When the script is run, it will place the files’ contents in the right files in the right structure as you have described. Do not change the code any more, just wrap it in one bash script full of ‘echo’ and ‘cat’ commands that create it all in one go. Add the installation commands at the end. The resulting script should create all files, do all installation steps and launch the app in development mode assuming the GEMINI_API_KEY is already in environment.

and after another 294 seconds there is one file to download and execute.
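For reference, the tail of such a script tends to look something like the sketch below. This assumes a Node.js app managed with npm and a dev script defined in package.json, which is my guess, not necessarily what the model generated for this spec:

# ... cat/echo commands that create all the project files go above ...

# installation steps appended at the end of the generated script
npm install

# the app reads the key at runtime, so fail early if it is missing
if [ -z "$GEMINI_API_KEY" ]; then
  echo "GEMINI_API_KEY is not set in the environment" >&2
  exit 1
fi

# launch the app in development mode
npm run dev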

This time it did not work perfectly. There were these four errors in the codebase. They are maybe not totally easy for non-developers to fix, and they hurt the vibe a bit. So either the AI still needs to get a bit better, or the spec needs some additions and the temperature needs tweaking. You can also try it a couple of times and maybe get lucky. Anyway, it is almost there, close to being extremely usable even for this relatively more complex example.

After the four fixes, the application does start and is quite nice and usable.

Conclusion

Gemini 2.5 is very good. With its super large context window it opens up a new approach to code generation that is free, can sometimes be friendly to non-developers, and is suitable for projects much more complex than what in-chat code generation could previously handle. It is an alternative way to kickstart an MVP, with developers only needing to join in later, or maybe to try out several different minimal MVPs in parallel.

On the other hand, this does not replace the full-blown coding agent loop with Cursor / Aider / Claude Code, if you are already using and paying for that and need it for even more complex use cases.