Self-Improving Python Scripts with LLMs: My Journey
Source: Dev.to
Introduction
As a developer, I’ve always been fascinated by the idea of self‑improving code. Recently, I experimented with Large Language Models (LLMs) to make my Python scripts more autonomous and efficient. Below is a step‑by‑step guide on how I integrated LLMs into my workflow, including code examples and best practices.
Generating Docstrings
One of the most useful features of LLMs is their ability to generate human‑like text from a prompt. I used this to automatically create docstrings for functions.
import llm_groq
def generate_docstring(func_name, func_description):
    llm = llm_groq.LLM()
    prompt = f'Write a docstring for the {func_name} function, which {func_description}'
    response = llm.generate_text(prompt)
    return response
def add_numbers(a, b):
    return a + b

# Generate a docstring for add_numbers using the LLM
docstring = generate_docstring('add_numbers', 'takes two numbers as input and returns their sum')
print(docstring)
The LLM returns a concise docstring that accurately describes add_numbers.
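Generating the text is only half the job; to make it useful, the string also has to be attached to the function so that help() and IDEs can see it. One way is to assign it to __doc__ at runtime. Here is a minimal, runnable sketch of that step, with a canned string standing in for the LLM response:

```python
import inspect

def add_numbers(a, b):
    return a + b

# In a real run this string would come from generate_docstring();
# a canned response stands in here so the sketch runs without an LLM.
generated = 'Return the sum of two numbers a and b.'
add_numbers.__doc__ = generated

# The docstring is now discoverable via help() and inspect
print(inspect.getdoc(add_numbers))
```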
Optimizing Code
I also leveraged the LLM to analyze existing code and suggest performance improvements.
import llm_groq
def optimize_code(code):
    llm = llm_groq.LLM()
    prompt = f'Optimize the following Python code: {code}'
    response = llm.generate_text(prompt)
    return response
import inspect

def slow_function():
    result = 0
    for i in range(1_000_000):
        result += i
    return result

# Pass the function's own source to the LLM instead of duplicating it by hand
optimized_code = optimize_code(inspect.getsource(slow_function))
print(optimized_code)
The LLM suggested a more efficient algorithm for calculating the sum, demonstrating its ability to provide meaningful optimization advice.
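For this particular loop, the standard optimization (and the kind of rewrite the suggestion points to) is the arithmetic-series formula: the sum 0 + 1 + … + (n−1) equals n(n−1)/2, turning a million additions into one multiplication. A sketch of the rewritten function alongside the original:

```python
def slow_function():
    result = 0
    for i in range(1_000_000):
        result += i
    return result

# Constant-time equivalent: the sum of 0..n-1 is n * (n - 1) // 2
def fast_function(n=1_000_000):
    return n * (n - 1) // 2

print(slow_function() == fast_function())  # both return 499999500000
```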
Automated Testing
Generating test cases can be time‑consuming. By prompting the LLM, I obtained comprehensive test suites automatically.
import llm_groq
def generate_test_cases(func_name, func_description):
    llm = llm_groq.LLM()
    prompt = f'Write test cases for the {func_name} function, which {func_description}'
    response = llm.generate_text(prompt)
    return response
def divide_numbers(a, b):
    if b == 0:
        raise ZeroDivisionError('Cannot divide by zero')
    return a / b
test_cases = generate_test_cases('divide_numbers', 'takes two numbers as input and returns their division')
print(test_cases)
The generated tests covered typical scenarios, including handling division by zero.
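The exact output varies from run to run, but a representative sketch of what such a prompt yields (plain assertions covering the happy path, negative operands, and the zero-divisor error) looks like this:

```python
def divide_numbers(a, b):
    if b == 0:
        raise ZeroDivisionError('Cannot divide by zero')
    return a / b

def test_basic_division():
    assert divide_numbers(10, 2) == 5.0

def test_negative_operands():
    assert divide_numbers(-9, 3) == -3.0

def test_division_by_zero():
    # The error path must raise, not return
    try:
        divide_numbers(1, 0)
    except ZeroDivisionError:
        pass
    else:
        raise AssertionError('expected ZeroDivisionError')

for test in (test_basic_division, test_negative_operands, test_division_by_zero):
    test()
print('all tests passed')
```

Running generated tests through a quick loop like this, before committing them to a real test suite, is a cheap sanity check that the LLM's assertions actually hold.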
Conclusion
Using the llm_groq module, I automated several aspects of my development workflow:
- Docstring generation – consistent documentation with minimal effort.
- Code optimization – quick suggestions for performance improvements.
- Test case creation – comprehensive tests generated on demand.
I highly recommend exploring LLM capabilities in your own Python projects. As I continue experimenting, I’m excited to discover new ways LLMs can further streamline and enhance Python development.