How to prompt GPT models to create effective tutorials

I wanted to share a valuable insight I recently learned about using GPT models more effectively, whether they are local or centralized. The success of utilizing language models like ChatGPT largely depends on how we prompt the model. Generating a good prompt is key, and I have a helpful technique to share with you.

When I encounter difficulty in obtaining the desired results, I now ask for a prompt first and use that prompt in a new chat, frequently going back and forth between the two. This approach has proven to be quite useful. It is crucial to establish an appropriate context to guide the GPT model’s responses. One effective way to achieve this is by specifying the kind of personality or role the GPT model should assume in the prompt.
For example, after an extensive back-and-forth conversation, I was trying to understand how to configure the DHCP server to update unbound without relying on the ‘hosts’ functionality. Eventually, I succeeded in creating a reasonably comprehensive tutorial (whose correctness I have yet to test; I will update this message to say whether it works as expected). I have included the prompt and tutorial in the following chat log. Please take note of how the context is established right from the very first line of the prompt. Also note that the tutorial is still not perfect and needed further refinement, achieved by pointing out inconsistencies in the generated text.
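The tutorial itself is not reproduced here, but to give a flavor of the kind of mechanism involved: one common approach is to parse the DHCP daemon's leases file and feed each named lease into unbound via `unbound-control local_data`. The sketch below is my own minimal illustration, not the generated tutorial; the leases-file path and the `lan.` zone name are assumptions you would adapt to your setup.

```python
import re

# Assumptions for illustration: the ISC dhcpd leases file path and the
# local zone name are placeholders, not verified IPFire defaults.
LEASES_FILE = "/var/state/dhcp/dhcpd.leases"
LOCAL_ZONE = "lan."

def parse_leases(text):
    """Extract {hostname: ip} from ISC dhcpd leases-file text.

    Later lease blocks override earlier ones for the same hostname.
    """
    pairs = {}
    for ip, body in re.findall(r"lease ([\d.]+) \{(.*?)\}", text, re.S):
        match = re.search(r'client-hostname "([^"]+)"', body)
        if match:
            pairs[match.group(1)] = ip
    return pairs

def local_data_commands(pairs, zone=LOCAL_ZONE):
    """Build one `unbound-control local_data` invocation per host."""
    return [
        ["unbound-control", "local_data", f"{name}.{zone} IN A {ip}"]
        for name, ip in sorted(pairs.items())
    ]

def sync_leases_to_unbound():
    """Read the leases file and register every named lease in unbound."""
    import subprocess
    with open(LEASES_FILE) as fh:
        for cmd in local_data_commands(parse_leases(fh.read())):
            subprocess.run(cmd, check=True)
```

A script like this would be triggered from a DHCP lease-commit hook or a periodic job, so name resolution stays current without touching the ‘hosts’ file.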

I hope this technique proves beneficial. It is well known by now that by crafting well-defined prompts, we can enhance our interactions and achieve more accurate and insightful responses. If you have any questions or would like to share your own experiences, please feel free to join the discussion.

By the way, these prompts can be saved and reused in future interactions. I have read reports of people managing to get better code by prompting the model in a language-specific way, saving these prompts in a mini-library that can be reused in future interactions.
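As a rough illustration of such a mini-library (the file name, template format and prompt text below are my own inventions, not taken from any of those reports), saved prompts can be as simple as named templates in a JSON file:

```python
import json
from pathlib import Path
from string import Template

LIBRARY = Path("prompts.json")  # hypothetical storage location

def save_prompt(name, template):
    """Add or update a named prompt template in the library file."""
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data[name] = template
    LIBRARY.write_text(json.dumps(data, indent=2))

def render_prompt(name, **values):
    """Load a template by name and fill in its $placeholders."""
    data = json.loads(LIBRARY.read_text())
    return Template(data[name]).substitute(values)

# Save a language-specific coding prompt once...
save_prompt(
    "code-review",
    "You are a senior $language developer. Review the following code "
    "for bugs and style issues:\n$code",
)
# ...and reuse it in any future interaction, for any language.
prompt = render_prompt("code-review", language="Python", code="print('hi')")
```

Note how the role-setting context ("You are a senior … developer") is baked into the template, so every reuse starts the chat the same way.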


Yesss, indeed it is useful and a valuable resource. It shouldn’t be limited, and should be free to use whenever, for whatever. Actually, it should be the first thing to get into schools: an updated and accurate ChatGPT. Thanks for the tip. Perhaps yours too was limited to September 2021; it would be useful to get up-to-date commands and insights.
Regards
G70P

Both centralized LLM models available from OpenAI were trained on datasets that included data only up to September 2021. This limitation can be partly overcome with plugins that feed in more recent data. There are already many plugins available, and their number increases every day.

The space is changing at an incredible pace, and other models are emerging with smaller training sets that can be fine-tuned with local data and run locally. GPT4All is the most popular code used for this purpose. When I am able to understand a bit more than nothing (the present state), I will try to start using a local model. The sooner I move away from OpenAI, the better I will feel.

Long term, there could be a model trained on everything relevant to IPFire, including Linux administration, network engineering and firewall design, up to IPFire’s own documentation and code, or even part of this forum. This would lower the barrier to entry for learning how to properly configure IPFire many times over compared to now. The sooner we learn to use this tool well, the better it will be for everyone.


As far as my short life story tells me, there will always be a way to “code” the game; if those who look toward the good of everyone are cut off at the beginning of the game, they are already at a disadvantage.

“AI will not take your job, but someone who can use it might”
– the AI community


:blue_heart: : yessss

Finding signal in this ocean of noise is problematic. I share with this community the highest source of pure signal I could find:

The “From zero to hero” YouTube course by Andrej Karpathy, the former lead developer of the Tesla Autopilot computer-vision program. From what I can see, there is no better source for a beginner at the moment.

It looks interesting, but it’s 2 hours long; I’ll watch it later. But yes: expertise, know-how, and the ability to answer questions, fulfilling interest and knowledge when it’s needed. I’m not afraid, as I’m convinced it will bring new professions as well. History and its revolutions taught us that.

Actually, it’s more than 13 hours; it’s a playlist, and the videos are all extremely dense. You start with code from the very beginning, so you need to know how to code to follow along. It is a long way for people like me.


Hopefully OpenAI will reveal shortcuts and tips you weren’t aware of, as well as quick insights about general information available when programming or doing what you do. :wink:
As I said before in a post, “It took me 40 years to see the BASIC programming of a dice throw,” and it was easier than what I thought. (I never had anyone who could teach me that at the time.)


I understand completely. I suspect I share a lot with you in this aspect. AI doesn’t make learning certain things easier for me, not one bit. It simply makes it possible.

:+1: It’s just a modern encyclopedia! Who’s afraid of encyclopedias? No problem. I’m a teacher by academic background!

To be sure we do it correctly, I’ll start using it as a bibliography reference at the end:
cited in; paraphrasing; quoting; as a reference to

@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}

It has the year and the date consulted, as well as the program or software used. Personally, I think a reference is missing for OpenAI’s digital Internet searching/answering robot, as exists for encyclopedias.
Regards

Sorry for the post edit, but… the answers from GPT should also include their sources and bibliography, just to be on the safe side, in case we want to discard some info. It doesn’t have to give a bibliography in every answer :smiley:, just a footer with the resources used. In the case of dialogues, no bibliography of course, just the software for binaries which were not included in OpenAI’s calculations and logic (we might like to choose other calculations for external binaries).

GPT4All and GPT-4 from OpenAI are not the same model. The latter is closed source.
