Is it a bit embarrassing to put a really, really simple thing out into the world?

Let’s find out:

A Really Really Simple Thing

If taking a look at the whole script first is your thing, feel free to skip down to the Whole Script section at the end.

So why? The amount of ground broken by this little script is negative. It’s more like the tamper you use before laying paver bricks: it compacts ground; it unbreaks ground.

There’s an honesty here, I think, and a chance to talk about progress. As I’ve played with the OpenAI API in my own ways, it’s been useful to have a base script to start from.

But this isn’t actually where I started from. I began with a purpose-built plan in mind: feed in structured data, get out results I could log in a spreadsheet. What’s not represented here are the specific quirks and checks associated with those earlier…I won’t even say experiments, just pokes at the system.

And that’s ok. I spend time thinking about the structure of mental processes, the right way to conceive of something. It can be tempting to enforce a particular general-to-specific flow for this, a one-way street along which ideas can travel. It can be tempting, for me at least, to stay out of conversations where I feel like I’m splashing around in the shallow end.

My aim, and my recommendation, is to own it, though. Acknowledge limitations, focus on results, have fun with your current capabilities, and stretch those into the future.

A Step Through

 1    name = 'topic of chat discussion'
 2
 3    #inputs
 4    system_msg = open('msgs/system.txt','r').read()
 5    user_msg = open('msgs/user.txt','r').read()
 6    doc = ''
 7
 8    print("\nIs there an associated doc? y/n \n")
 9    doc_choice = input()
10    if doc_choice=='y':
11        filename = 'docs/' + name + '.txt'
12        doc = open(filename,'r').read()

After playing around with structures, I came to this. It pulls info in reasonably efficiently. I was looking for flexibility, speed, and simplicity, since I was making this available to some barely-technical loved ones.
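
On that note, here’s a minimal sketch of what a slightly more forgiving version of that input step could look like when less technical hands are involved. The read_msg helper and its fallback defaults are illustrative additions, not part of the script.

    import os

    def read_msg(path, default=''):
        # Hypothetical helper: return the file's contents, or a fallback if it's missing.
        if os.path.exists(path):
            with open(path, 'r') as f:
                return f.read()
        print('Note: %s not found, using a default.' % path)
        return default

    system_msg = read_msg('msgs/system.txt', 'You are a helpful assistant.')
    user_msg = read_msg('msgs/user.txt')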

48    #raw outputs
49    output = response["choices"][0]["message"]["content"]
50    print(output)
51    message_history.append({"role": "assistant", "content": output})
52    outputs.append(output)

I stripped away a lot of complexity to get to this point. I still look over the outputs and prepare a file to save, but I began this process by capturing programmatic outputs and trying to coerce them into a spreadsheet so I could review all the results later. That approach was more prone to fail; this is easier.
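
For contrast, a rough sketch of that spreadsheet-style capture, done with the standard csv module; it assumes the outputs list from the script, and the column names are placeholders.

    import csv

    # One row per model response, ready to open in a spreadsheet.
    rows = [{'turn': i + 1, 'response': text} for i, text in enumerate(outputs)]

    with open('chat_log.csv', 'w', newline='') as fp:
        writer = csv.DictWriter(fp, fieldnames=['turn', 'response'])
        writer.writeheader()
        writer.writerows(rows)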

61    print("\nEnd chat? y/n \n")
62    decision = input()
63    if (decision=='y'):
64        more_chat = False
65    else:
66        print('\nAdditional prompt?\n')
67        additional = input()
68        message_history.append({"role": "user", "content": additional})

I didn’t have much to say about the parameters to the API call above. It wasn’t an interesting programming problem, but it did teach me something about how these systems work under the hood. That was valuable. As for this part, the chained history: I’m sure its structure looks different in the internal representation of whatever machine the model is running on, but I’m not too proud to admit it took me a minute to come to grips with it.
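
If the chained history is unfamiliar, a toy illustration: the API call itself is stateless, so the full back-and-forth gets re-sent every time, and the model only sees earlier turns because the script keeps appending them. The message contents below are made up for the example.

    message_history = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "First question"},
    ]

    # After the first response comes back, it gets appended...
    message_history.append({"role": "assistant", "content": "First answer"})

    # ...and the next user prompt is appended before the next call goes out.
    message_history.append({"role": "user", "content": "Follow-up question"})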

71    timestr = time.strftime("%Y%m%d-%H%M%S")
72    text_name = name + "_outputs_" + timestr
73
74    with open('%s.txt' % text_name, 'w') as fp:
75        for item in outputs:    
76            # write each item on a new line
77            fp.write("%s\n" % item)
78
79    #system command to dump chat history
80    os.system('cp %s.txt desired_path' % text_name)

This was the end: take everything the model spat out and write it to a file. There are interesting applications of this that I’ve branched out to from here, or pruned back from, as the case may be. Feeding a Mail Merge template was one, so you can end up with formatted documents more easily and more specifically than what the LLM outputs on its own.
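
As one sketch of that Mail Merge direction: write the outputs into a CSV that a word processor’s merge feature can read as a data source. It reuses name, outputs, and text_name from the script; the column names are hypothetical.

    import csv

    # Each response becomes one merge record alongside the chat topic.
    with open('%s_merge.csv' % text_name, 'w', newline='') as fp:
        writer = csv.writer(fp)
        writer.writerow(['topic', 'response'])
        for item in outputs:
            writer.writerow([name, item])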

Conclusions

This was a helpful little learning experience for me at the time, and it was a stepping stone to bigger and better things.

Simple is good; simple is fine. Don’t pretend simple is complex, but don’t denigrate the worth of simple.

Whole Script

 1    name = 'topic of chat discussion'
 2
 3    #inputs
 4    system_msg = open('msgs/system.txt','r').read()
 5    user_msg = open('msgs/user.txt','r').read()
 6    doc = ''
 7
 8    print("\nIs there an associated doc? y/n \n")
 9    doc_choice = input()
10    if doc_choice=='y':
11        filename = 'docs/' + name + '.txt'
12        doc = open(filename,'r').read()
13      
14    import os #syscommands
15    import openai #api tools
16    import time #timestamping
17
18    # Set openai.api_key from the OPENAI_API_KEY environment variable
19    openai.api_key = os.environ["OPENAI_API_KEY"]
20
21    #outputs
22    outputs = [] #raw responses
23
24    more_chat = True
25    message_history=[{"role": "system", "content": system_msg},
26                    {"role": "user", "content": user_msg}
27                    ]
28    if doc_choice=='y':
29        message_history.append({"role": "user", "content": doc})
30
31    while(more_chat):
32    
33        try:
34            response = openai.ChatCompletion.create(
35
36                model="gpt-4o", #update as needed
37                
38                temperature=1, #range 0-2, default 1
39                top_p=1, #range 0-1, default 1
40
41                frequency_penalty= 0, #range -2-2, default 0, multiplicative based on uses
42                presence_penalty= -0.2, #range -2-2, default 0, additive for all used once or more
43
44                messages= message_history
45
46            )
47
48            #raw outputs
49            output = response["choices"][0]["message"]["content"]
50            print(output)
51            message_history.append({"role": "assistant", "content": output})
52            outputs.append(output)
53
54        except openai.error.RateLimitError as e:
55            print(e)
56            break
57        except openai.error.APIError as e:
58            print(e)
59            break
60
61        print("\nEnd chat? y/n \n")
62        decision = input()
63        if (decision=='y'):
64            more_chat = False
65        else:
66            print('\nAdditional prompt?\n')
67            additional = input()
68            message_history.append({"role": "user", "content": additional})
69
70
71    timestr = time.strftime("%Y%m%d-%H%M%S")
72    text_name = name + "_outputs_" + timestr
73
74    with open('%s.txt' % text_name, 'w') as fp:
75        for item in outputs:    
76            # write each item on a new line
77            fp.write("%s\n" % item)
78
79    #system command to dump chat history
80    os.system('cp %s.txt desired_path' % text_name)

PARS Score: P ◼◻◻  A ◼◻◻  R ◼◻◻  S ◼◻◻