Create a machine learning algorithm to generate missions #13
GPT-3's ability to engage in "few-shot learning" means that it can figure out what text to generate from just a few examples provided in the prompt. This makes this task a lot more doable.
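For reference, here's roughly what few-shot prompting looks like in code. This is a minimal sketch using the older openai-python completions interface; the example missions in the prompt are placeholders, not real generator output:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Two placeholder example missions, then an open slot for the model to fill.
FEW_SHOT_PROMPT = """Mission briefing: Retrieve a missing R&D prototype from Sector ABC before a rival service firm does.
###
Mission briefing: Escort a high-clearance VIP across three sectors while secret society contacts demand sabotage.
###
Mission briefing:"""

response = openai.Completion.create(
    engine="davinci",      # base GPT-3 model available at the time
    prompt=FEW_SHOT_PROMPT,
    max_tokens=400,
    temperature=0.8,
    stop=["###"],          # stop when the model starts a new example
)
print(response.choices[0].text.strip())
```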
I played around with GPT-4 recently (gave it a fully-generated mission and then asked it to reverse-engineer a prompt that would generate a similar type of mission), and got a one-shot prompt, which I then modified. Preliminary testing shows that this only works halfway: character limits cause the model to either cut off text generation or produce abbreviated content. So this issue is definitely possible, but I would need to break the task down into multiple prompts rather than one single prompt. The generated mission is also a bit too Zappy for my taste, so some more prompt refinement may be needed.
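Here is a hedged sketch of what that multi-prompt breakdown might look like. The section names and the complete() helper are illustrative, not the actual prompt I used:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Illustrative section list; a real mission outline may differ.
SECTIONS = ["Mission Background", "Mission Briefing", "Secret Society Orders",
            "Encounters", "Debriefing"]

def complete(prompt: str) -> str:
    """One chat-completion call via the older openai-python interface."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=600,
        temperature=0.8,
    )
    return response.choices[0].message.content.strip()

mission = []
for section in SECTIONS:
    # Feed previously generated sections back in as context, crudely trimmed
    # (by characters, not tokens) so each prompt stays under the length limit.
    context = "\n\n".join(mission)[-6000:]
    text = complete(f"{context}\n\nWrite the '{section}' section of this "
                    f"PARANOIA mission.")
    mission.append(f"{section}\n{text}")

print("\n\n".join(mission))
```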
User Story (MVP) - Users should be able to run a neural network trained on "synthetic data" (missions generated by PARANOIA Super Mission Generator) to generate human-readable missions.
Post-MVP - The neural network is to be trained on published PARANOIA missions instead of "synthetic data". Doing this will require getting permission from Mongoose Publishing first.
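For the MVP user story above, building the "synthetic data" corpus could be as simple as dumping template-generated missions into one training file. A sketch, assuming a hypothetical generate_mission() entry point for the Super Mission Generator:

```python
# Hypothetical import; substitute the generator's real entry point.
from mission_generator import generate_mission

SEPARATOR = "<|endoftext|>"  # GPT-2's document separator token

with open("missions.txt", "w", encoding="utf-8") as f:
    for _ in range(10_000):  # more samples generally helps fine-tuning
        f.write(generate_mission())
        f.write("\n" + SEPARATOR + "\n")
```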
Notes - The reason why this issue is "low-priority" is that we already have a good enough system for generating PARANOIA missions (aside from a few minor issues like 'complications'). To put it frankly, there is very little reason to add neural networks into the equation. Templates work fine.
The reason why this issue even exists, though, is the realization that neural networks will wind up being the future of text generation. OpenAI wrote a blog post on February 14th, 2019 about a neural network that can generate text, and the generated text (though hand-selected) is miles above what I would expect a neural network to produce. The public text generator that they released was pretty decent as well. Even though there are still subtle flaws in the neural network they have right now, those flaws can be fixed given enough time and resources. OpenAI has demonstrated what is possible, and what is possible will wind up being inevitable.
gwern wrote a blog post explaining how he was able to use OpenAI's public text generator (nicknamed GPT-2-small) to create 19th-century poetry generators (GPT-2-poetry and GPT-2-poetry-prefix). This same sort of process could also be used to generate PARANOIA missions.
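gwern's write-up used nshepperd's TensorFlow fork of OpenAI's gpt-2 repository; the sketch below shows the same fine-tune-then-sample workflow using the Hugging Face transformers library instead (the file name and hyperparameters are assumptions):

```python
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# missions.txt: one mission per block, separated by <|endoftext|>
dataset = TextDataset(tokenizer=tokenizer, file_path="missions.txt",
                      block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-missions", num_train_epochs=3,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# Sample a mission from the fine-tuned model.
ids = tokenizer("Mission briefing:", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_length=300, do_sample=True)[0]))
```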
Thus, if I want to stay on the cutting edge of natural language generation, I need to eventually master gwern's process. Of course, I don't need to do so immediately - our current generator works fine for now. But I do need to start preparing for the future.
Priority - Low
"The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach."---"The Bitter Lesson", by Rich Sutton, published on March 13, 2019
Note though that "The Bitter Lesson" assumes that (a) we have access to ever-increasing computation (which is doubtful, considering the end of Moore's Law), and (b) we're willing to spend tons of money to use that increased computation (which is also doubtful). So even if "search and learning" (i.e., neural networks) winds up being superior to knowledge-based approaches (i.e., hardcoded templates), knowledge-based approaches may be more cost-effective and worthwhile in the short term.