Andrej Karpathy has a nice Zero to Hero lecture series that you can follow along with; it ends with you building your own simple GPT. The first lecture has you build your own MLP (multi-layer perceptron). By the end of that you will have backpropagation down and will finally understand what it means to train a model. https://karpathy.ai/zero-to-hero.html And LangChain is your friend if you want to use GPT as a component in a processing pipeline (as in integrating with Wolfram Alpha, etc.).
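To give a flavor of what "training a model" means before you watch the lectures, here is a minimal sketch (my own toy example, not from the course): a one-hidden-layer MLP fit to XOR with hand-written backpropagation. The layer sizes, learning rate, and task are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for step in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)            # hidden activations
    logits = h @ W2 + b2
    p = 1 / (1 + np.exp(-logits))       # sigmoid output
    loss = np.mean((p - y) ** 2)        # mean squared error

    # backward pass: the chain rule, which is what backpropagation is
    dp = 2 * (p - y) / len(X)
    dlogits = dp * p * (1 - p)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(0)
    dh = dlogits @ W2.T
    dpre = dh * (1 - h ** 2)            # tanh derivative
    dW1 = X.T @ dpre; db1 = dpre.sum(0)

    # gradient descent: nudge every weight against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(loss, p.round(2).ravel())  # loss shrinks; outputs approach [0, 1, 1, 0]

That loop, scaled up a few billion parameters, is the core of what the GPT lecture builds up to.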
Sometimes love
is really just a (g)love
comfort for the cold
( This is like saying
that ain't amour,
it's just (g)lamour.)
Sometimes you wear it
Sometimes it fits
Sometimes it doesn't
All garments
get old
tattered
But when it is just that: L-O-V-E
(With no hidden letter
Preceding the matter)
Then it is like butter
Melting
In the frying pan
Or the sharp pain
of a babe
biting on tender nipples
Yes, it is like Milk
Warm
And Nourishing
Or the hot unshed Pulsing Blood
of a dear Son
held down in Sacrifice
The Sultan of the Hearts
Mixes a specific potent mix
For each specific Lover
Love is sweet and bitter wine
poison
potion
Part Pain
Part Pleasure
Always to treasure.
Permit me to puke. I cannot believe that this author of bodice-rippers for the pseudo-intellectual set is actually taken seriously as a "philosopher".
It is also like documenting code: too much is left to the discretion of the poster. Finally, tag-centric viewing is a channel for 'spam'. For tags to be effective in addressing the problem at hand, the Hubski community must very carefully vet and approve the tagset, and tags must then be selected from that list (perhaps via autocomplete to make it less tedious; see the sketch below). It's a tough problem. I don't believe Hubski has solved the community communication and info-overload problem; it is a difficult one. This is in NO WAY meant to detract from the incredible work that has gone into this great little community and platform. ([edit] mk: note that Stack Overflow has partially solved this problem by crowdsourcing the editorial system.)
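By way of illustration only, the vetted-tagset-plus-autocomplete idea could be as simple as prefix matching against an approved list. The tag names here are hypothetical; Hubski's actual tagset and UI would of course differ.

APPROVED_TAGS = ["science", "scifi", "security", "poetry", "politics", "programming"]

def autocomplete(prefix: str, tags=APPROVED_TAGS, limit=5):
    """Return up to `limit` approved tags starting with `prefix`."""
    prefix = prefix.lower().strip()
    return [t for t in sorted(tags) if t.startswith(prefix)][:limit]

print(autocomplete("sci"))  # ['science', 'scifi']

The point is that posters pick from a curated vocabulary rather than inventing tags, which is what keeps tag-centric viewing from degenerating into spam.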