```python
from rich.console import Console

console = Console()
console.print(
    ":rocket: Hi! I'm a researcher, teacher, podcaster, and software developer!"
)
```

🚀 Hi! I'm a researcher, teacher, podcaster, and software developer!
Innovating in technical areas such as software engineering and software testing, I teach courses, conduct research, write papers and a blog, give presentations, create software, and serve organizations. Working as an Associate Professor in the Department of Computer and Information Science at Allegheny College, I am an associate editor for the Journal of Software: Evolution and Process, an academic editor for the PeerJ Computer Science journal, a program committee member for conferences like the International Conference on Software Testing, Verification and Validation and the International Conference on Software Engineering, and a reviewer for journals like Transactions on Software Engineering and the Journal of Software Testing, Verification and Reliability. Along with media appearances on podcasts like Stack Overflow and Talk Python, I interview the world’s leading experts on software engineering as a co-host of Software Engineering Radio. You can learn more about me and my work by reading my biography, downloading my curriculum vitae, and subscribing to my mailing list.
Professional Service
| Venue | Role(s) | Year(s) |
|---|---|---|
| International Conference on Software Maintenance and Evolution | Tool Demonstrations Program Committee | 2025 - 2026 |
| International Flaky Tests Workshop | Program Committee Member | 2024 - 2026 |
| International Symposium on Software Testing and Analysis | Program Committee Member, Tool Demonstrations Program Committee Member | 2023 - 2026 |
| PeerJ Computer Science Journal | Academic Editor | 2019 - 2026 |
| Journal of Software: Evolution and Process | Associate Editor, Reviewer | 2012 - 2026 |
Software Engineering
- Cellveyor: Easily convey reports from Google Sheets to GitHub
- Chasten: Configurable linting tool that uses XPath expressions
- GatorGrade: Python front-end for the GatorGrader assessment tool
- GatorGrader: Automated assessment for source code and writing
- SchemaAnalyst: Data generation and mutation analysis for database schemas
Status Updates
i'm confused why no one ever thinks about the psychology behind this stuff
when a programmer tells you "my entire job is just prompting" they are just very excited about discovering a novel way of working
every tool whether it's neovim or AI or functional programming starts off with a period of overuse because we're all looking for the feeling of finding a secret everyone else is missing out on
the novelty also makes you literally work harder (temporarily) because you love experiencing the new setup
after time goes by you eventually realize you lied to yourself about how much of an impact it was making and settle into something more balanced
anyone not self-aware of this dynamic is going through it for the first time
https://bird.makeup/users/deedydas/statuses/2000472514854825985
early in my career when i was learning a new tech or language i would tinker and google whenever i hit a roadblock
eventually i realized books had all the information i needed pre-googled for me
i think this is happening again with LLMs - sometimes i waste so much time letting the LLM keep taking swings instead of reading something
hope the industry doesn't abandon producing good reading material
I fully appreciate how hyperbolic this must sound to anyone who hasn't started working with the latest models. For me, the inflection point was Opus 4.5, and now the fast catch-up of Kimi K2.5. It's just completely different from what we had even last summer.
The AI hype-cyclone is bad, but so is the anti-AI witch hunt. Commits co-authored by Claude do not mean that a project has "abandoned engineering as a serious endeavor."
Would we say that accepting contributions from new developers means we've "abandoned engineering as a serious endeavor"? No.
Claude can write wrong code. New contributors can write wrong code. What matters is what you do with that code after it's been written.
You want your code to be faster, or at least not to get slower.
Step 1. Add benchmarks to CI. This will catch accidental regressions.
Step 2. You spend a day adding a feature, submit a PR... and discover you accidentally made an important code path slower. You don't know which particular commit was at fault, though.
Step 3. Add unit tests that will catch changes to speed, so you can catch potential regressions immediately, while coding a feature or bugfix. Or, that's the theory anyway.
Here are some thoughts on how to do step 3: https://pythonspeed.com/articles/speed-unit-tests/
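One way to sketch step 3: since wall-clock timing is noisy on shared CI machines, a "speed unit test" can instead count how many times an expensive helper runs, so an accidental quadratic code path fails the test deterministically. This is a minimal illustration, not the linked article's exact recipe; the `normalize`/`process` functions and the linear-call threshold are hypothetical examples.

```python
# Sketch of a "speed unit test": assert on the *amount of work done*
# (call counts) rather than elapsed time, which is flaky in CI.
# All names here are hypothetical stand-ins.
from unittest.mock import patch


def normalize(record):
    """Stand-in for an expensive per-record operation."""
    return record.strip().lower()


def process(records):
    """Process each record exactly once (the property we want to guard)."""
    return [normalize(r) for r in records]


def test_process_does_linear_work():
    records = ["A ", " b", "C"]
    # Wrap normalize with a spy so we can count invocations.
    with patch(f"{__name__}.normalize", wraps=normalize) as spy:
        result = process(records)
    assert result == ["a", "b", "c"]
    # One call per record: if a change makes process() re-normalize
    # records inside a nested loop, this assertion fails immediately.
    assert spy.call_count == len(records)
```

A test like this runs in milliseconds alongside the regular unit suite, so a speed regression surfaces while you are still writing the feature, rather than a day later in a CI benchmark run.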