I recently ran into a memory issue with Python: a long-running script used more and more memory until the kernel eventually killed it. Python doesn't have "memory leaks" in the same sense that lower-level languages do, but it is possible to tie up memory by keeping a reference to it somewhere you've forgotten about. As it turns out, there are some awesome tools for troubleshooting this kind of bug in Python.
I am taking the Coursera HPP course, and I just finished watching lectures 6-2 and 6-3. The visualizations of the prefix sum kernels in these two lectures are hard to understand because they are cluttered with curvy, overlapping arrows. I put together some cleaner, larger visualizations to show how these kernels work. Hopefully this will be of use to other Coursera students.
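For reference, here's a plain-Python sketch of the doubling-stride scan pattern (Kogge-Stone style, one of the kernel designs covered in this part of the course); it's an illustration of the data movement, not the actual CUDA code from the lectures:

```python
def kogge_stone_scan(xs):
    """Inclusive prefix sum, simulating the doubling-stride kernel:
    in each pass, every element adds in the value `stride` positions
    to its left, and the stride doubles between passes."""
    data = list(xs)
    stride = 1
    while stride < len(data):
        prev = list(data)  # stands in for the barrier-synchronized read in the kernel
        for i in range(stride, len(data)):
            data[i] = prev[i] + prev[i - stride]
        stride *= 2
    return data

kogge_stone_scan([3, 1, 7, 0, 4])  # → [3, 4, 11, 11, 15]
```

The GPU version does the same log2(n) passes with one thread per element; the `prev = list(data)` copy is where the kernel needs a barrier so threads don't read values already updated in the current pass.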
I recently wrote about how to write CouchDB views in Python, because I couldn't find any documentation online explaining a good way to do it. Today I'd like to tackle a similarly neglected topic: writing unit tests for your Python CouchDB views.
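To give a flavor of the approach (the view function and document shapes here are made-up examples): because a Python view is just a generator function, you can unit test it directly with the standard unittest module, no CouchDB server required.

```python
import unittest

def by_type(doc):
    # Hypothetical map function, written the way couchdb-python's
    # view server expects: a generator of (key, value) pairs.
    if "type" in doc:
        yield doc["type"], doc.get("_id")

class TestByType(unittest.TestCase):
    def test_emits_typed_docs(self):
        rows = list(by_type({"_id": "a1", "type": "user"}))
        self.assertEqual(rows, [("user", "a1")])

    def test_skips_untyped_docs(self):
        self.assertEqual(list(by_type({"_id": "a2"})), [])
```

Run it with `python -m unittest`. The tests feed plain dicts to the view and check the yielded rows, which is all a map function ever sees anyway.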
I've been interested in CouchDB lately, and since I'm primarily working in Python, I naturally want to use the two together. There's a pretty nice module called couchdb-python that makes it easy to connect to a server and to create, edit, and delete documents, but the paucity of information on how to write CouchDB views in Python is laughable.
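As a taste of what such a view looks like (this particular function is a made-up example), couchdb-python's view server has you write a map function as a generator that yields (key, value) pairs instead of calling emit():

```python
def user_names(doc):
    # Map function in couchdb-python's view-server style:
    # yield (key, value) pairs for each row the view should emit.
    if doc.get("type") == "user":
        yield doc["name"], None
```

To run functions like this server-side, CouchDB has to be pointed at the Python query server that ships with the module (the `couchpy` script), which is exactly the under-documented part the earlier post covers.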
During some downtime over the holidays, I have been looking into some public bug bounty programs. One of these programs brought me across an interesting SQLi vulnerability: a value is obtained from a cookie and used in a dynamic SQL query without any sanitization. This would be trivial to exploit but for one thing: the contents of the cookie are protected from tampering by a simple "signature". This post explores whether the signature can be cracked with John The Ripper.
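Here's a hypothetical reconstruction of the pattern (the secret, cookie format, and query are invented for illustration; the real site's details differ):

```python
import hashlib

# Invented server-side secret; in the real scenario this is what
# John The Ripper would be trying to recover.
SECRET = b"s3cr3t"

def sign(value):
    # Assumed cookie format: "<value>|md5(secret + value)"
    digest = hashlib.md5(SECRET + value.encode()).hexdigest()
    return value + "|" + digest

def build_query(cookie):
    value, sig = cookie.rsplit("|", 1)
    if sig != hashlib.md5(SECRET + value.encode()).hexdigest():
        raise ValueError("bad signature")
    # The signature check passes, but the value still lands in the
    # SQL string unsanitized -- classic SQL injection.
    return "SELECT * FROM sessions WHERE token = '%s'" % value
```

The catch, of course, is that an attacker can't sign a malicious value without the secret. But given a known value/signature pair from your own cookie, a construction like md5(secret + value) can be attacked offline as a salted hash, which is where John The Ripper comes in.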
I have been using Full Disk Encryption (FDE) on all my devices for about 5 years now. For the first few years, I believed that FDE was a robust defense against an attacker with physical access. Then one day I stumbled across the Evil Maid attack, a threat model in which an adversary has physical access for only a brief time. This threat model has deep implications for FDE and physical security in general, but it is relatively obscure: it doesn't even have a Wikipedia page! In this post, I develop a very simple Evil Maid proof-of-concept (POC) against the default FDE configuration in Lubuntu 16.04.
The most popular posts on my blog have been my harsh reviews of the CISSP and CEH certifications. You might just think that I don't like certifications in general, and I probably would have agreed with you before I signed up for PWK/OSCP. Today, I'm going to tell you about my experience working through this unusual infosec certification.
In July 2014, I found an obvious reflected XSS vulnerability in DesignCrowd. In the interest of responsible disclosure, I submitted a report to the company at that time, and I can't remember if I ever heard back. This draft post has been collecting dust ever since, so I'm finally publishing it today.
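The underlying bug pattern, sketched in Python (this is a generic illustration, not DesignCrowd's actual code): a request parameter is reflected into the HTML response without encoding, so a crafted link executes script in the victim's browser.

```python
from html import escape

def search_page(query):
    # Vulnerable: the user-supplied query string is reflected into
    # the HTML response verbatim.
    return "<p>Results for: %s</p>" % query

def search_page_fixed(query):
    # The standard fix: HTML-encode untrusted input before reflecting it.
    return "<p>Results for: %s</p>" % escape(query)
```

A payload like `<script>alert(1)</script>` passes straight through the first version but comes out as inert `&lt;script&gt;...` text in the second.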
In a previous article, I investigated the security claims of a product called JXcore. That has turned out to be one of the most popular (of the relatively few) articles on my blog. Not long after I posted it, I was informed that JXcore had fixed the security flaws I pointed out. Taking them at their word, I updated that article with a note about this claim, but I never actually investigated whether it was true.
Recently, a co-worker was trying to figure out how to protect a node.js project from reverse engineering and modification. Of course, programmers have spent decades trying to figure out ways to allow an end user to run a program without letting the end user reverse engineer or modify the program, and I've never heard of anybody successfully doing it. At best, the program is still insecure and the developers have only managed to piss off their high-paying customers.
So naturally, my skepti-larm was blaring when my co-worker sent me the link for JXcore.
About a year ago, I posted my thoughts on the CISSP certification. I recently took the CEH certification, and so I'm taking a few minutes to reflect on this certification as well.
The CISSP has become one of the hottest certifications to have (especially in the DC area) because of the growing budget for information security. But the CISSP exam itself has some major flaws, leading me to wonder if this is a valuable certification for individuals, companies, or society at large. (Disclaimer: I am a CISSP.)
I like trying to describe technical concepts to non-technical people. Everybody deserves (and needs!) a basic understanding of the things they use and rely on every day. One of the most important things you use online is your password – or hopefully many different passwords.
I've noticed more and more recently that reCAPTCHA is getting really hard to solve. Really hard. Actually, it's frequently impossible. Either Google needs to fix it or website owners need to stop using it.
One day, I opened up my e-mail and found something unexpected:
My inbox showed a bunch of e-mails that appeared to be from my account (“me” in the left column), sent with no subject line between 3:36 AM and 3:40 AM. I was definitely not awake at that time, and I was definitely not sending e-mails.