
Some notes on tools

In discussions of quality, it is tempting to emphasize testing tools like Selenium or expensive testing suites, and the virtues of a bug database.

While such tools have their place, quality assurance is an integral aspect of development, starting with the specs, PRDs, MRDs, or whatever you call the documents that guide your work, and that view implies more than a toolbox.

Modern IDEs, by their very nature, make it easier to write quality code. Crucial database backends now store data in "real" Unicode, not just some old compatibility format, and they handle internationalized content much better than their older counterparts. The same is true for technical authoring software. Translation memory technology and machine translation do their part.

Sometimes there are pleasant surprises, sometimes nasty ones. Big companies with bulky proprietary systems, like Oracle, are often at a disadvantage in this area. For example, a bug database (or issue tracker, as the more modern term goes) that cannot store Russian or Chinese in a human-readable format is certainly not a quality asset.

Some tools are more important than others for internationalization. At the IDE/source code level, you want to use an extensible source code analyzer like PMD to catch issues early.

You also need tools for basic tasks, such as comparing files; Beyond Compare is a really good one for that. Of the many screen capture tools, SnagIt has proven invaluable.

Analyzing network traffic can be done in some IDEs, but you may still want a tool that does this independently. Wireshark is a great protocol analyzer and a "must have" on Windows and Unix. Fiddler is an easier tool that helps a lot with HTTP traffic.

Tools I wrote in the past to supplement standard tools

For various jobs, I wrote tools to fill specific needs. Some were simple, others quite elaborate. I will list the important ones here in the hope that someone finds one or the other idea useful.


PathCopy/Xcopy for Windows

You can get this in the download section. I wrote it to make listing files easier and to handle files with the same names in different directories. The tool also puts a "copy" batch file onto the clipboard to make repeated copying of files easy.


Document compare (was integrated into the Microsoft internal MSDN tools)

This was a Visual Basic tool set I wrote for comparing document formatting. MS Word documents were sent to translation with certain formatting requirements. The tool compared paragraph formatting, text formatting, and tables, and generated an HTML preview of a given page; HTML was the final output.
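The original tool was Visual Basic; as a loose sketch of the comparison idea, you can represent each paragraph's formatting as a dictionary of attributes and report every mismatch between the source and the translated document. The attribute names here are illustrative, not the original tool's schema.

```python
# Sketch: compare per-paragraph formatting between two documents.
# Each document is a list of dicts, one dict of formatting attributes
# per paragraph. Attribute names ("style", "bold", ...) are invented.

def diff_formatting(source, translated):
    """Yield (paragraph_index, attribute, source_value, translated_value)
    for every formatting attribute that differs between paired paragraphs."""
    for i, (src, tgt) in enumerate(zip(source, translated)):
        # Look at attributes present on either side, in a stable order.
        for key in sorted(set(src) | set(tgt)):
            if src.get(key) != tgt.get(key):
                yield (i, key, src.get(key), tgt.get(key))
```

A report like this is easy to render as an HTML table, which matches the tool's HTML preview output.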


Web editing of UI resources

The tool displayed various formats of UI resource files in tables on a web page. Check-out from source control as well as check-in were part of the tool, and search and edit features were integrated. Instead of me having to make each and every string change, remote users without any knowledge of source control or file formats could do their thing.


In-place file diff

Commercial file diff tools were really good at comparing physical files in two locations. Our scenario was different: files from source control existed in only one location on the computer. The source control system at the time did have a change log, but it did not tell us what specifically was different. Was there a change to the strings (we need to do something), just a change in the list of authors of the file, or just a new timestamp (we can ignore this)?

I could have copied the existing files somewhere, synchronized the new ones, then diffed. Instead, I wrote a program that built a list of the existing files with attributes like date, time, and size, but also, crucially, a content checksum. This list was stored locally. The program would run every hour, get the latest files from source control, compute the checksums, and compare them to the stored data for the previous version. It would then zip up files that had "actionable changes". An email option could be invoked to send the zip file as an attachment to someone. This tool caught some late-weekend check-ins and allowed an i18n build engineer to do a better job.
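The core of that approach can be sketched in a few lines: snapshot a directory tree with content checksums, then compare two snapshots and keep only files whose contents actually changed. This is a minimal illustration, not the original program; the "actionable" rule here is simply "content checksum differs or file is new".

```python
# Sketch of checksum-based change detection: a touched timestamp or
# metadata-only change never shows up as actionable, only real content
# changes do.
import hashlib
import os

def snapshot(root):
    """Map each file path (relative to root) to (size, SHA-256 of contents)."""
    snap = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            snap[os.path.relpath(path, root)] = (os.path.getsize(path), digest)
    return snap

def actionable_changes(old, new):
    """Paths in `new` that are missing from `old` or whose checksum changed."""
    return sorted(path for path, (size, digest) in new.items()
                  if old.get(path, (None, None))[1] != digest)
```

Run `snapshot` on an hourly schedule, persist the result, and feed the previous and current snapshots to `actionable_changes` to decide what to zip up.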


Web QA

Before Selenium, automating Internet Explorer (via Visual Basic) gave one company a sweet, reliable QA tool. It recorded web page addresses for automatic replay and had an integrated bug editor (via CSV file) plus screenshot capability. Its day of glory came when it took a thousand screenshots, in multiple UI languages, of a phone emulator running a WML-based mobile component. One innovative feature was how the last version of the tool dealt with changing UI component names.

What does this mean in lay terms? Say you have a web page with an address form. You set up a testing "script" that steps through the form, enters data, and sends it to the server. Now, for reasons unknown, a change occurs: the field "postcode" gets renamed to "zipcode". The next test run will fail unless someone tells you about the change and you make the same change in your "script". I came up with a heuristic solution: if the script could not find an element by name, it would "click on elements" in a spiral fashion starting from the last known location of the field or button. It worked for all but the most radical UI changes.
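The spiral fallback can be illustrated with a small sketch, assuming we have the last known (x, y) position of the element. The element lookup here is a stand-in for "click at this position and see what responds"; the coordinates and probe step are invented for the example.

```python
# Sketch of the spiral heuristic: if an element's name is gone, probe
# candidate positions outward from its last known location.

def spiral_offsets(max_radius, step=1):
    """Yield (dx, dy) offsets in an outward square spiral, origin first."""
    yield (0, 0)
    for r in range(step, max_radius + 1, step):
        for dx in range(-r, r + 1, step):      # top and bottom edges
            yield (dx, -r)
            yield (dx, r)
        for dy in range(-r + step, r, step):   # left and right edges
            yield (-r, dy)
            yield (r, dy)

def find_near(elements, last_pos, max_radius=50, step=5):
    """Return the first element found at or near last_pos.
    `elements` maps (x, y) -> element name; None means nothing found."""
    x, y = last_pos
    for dx, dy in spiral_offsets(max_radius, step):
        hit = elements.get((x + dx, y + dy))
        if hit is not None:
            return hit
    return None
```

Because the search starts at the origin and widens outward, a renamed element that stayed put, or one that only shifted slightly, is found quickly; only a radical layout change defeats it.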


Unified view of disparate source control systems

Big Company (BC) had a voracious acquisition appetite. The i18n team was one of the teams that had to work with the fact that the acquired folks used all kinds of different tools and systems, most importantly different source control systems, such as Perforce, SourceSafe (indeed), Subversion, and others. While they would eventually (two-plus years out) move to the Big System (BS) of the Big Company, life needed to go on: releases needed to be made, customer issues needed to be resolved. So I wrote my own little tool that could display the source control contents of the different systems, synchronize, and check out files in a unified web page view. The three core elements were 1) a telnet library for Java, 2) some JavaScript for the web page, and 3) an FTP library.

Predefined or manually modified commands from an input field on the web page would go to the app server, be handed to the telnet library, and then be sent to the source control server, which would do its thing. The captured shell output would be processed by my app server, transformed into HTML links, and shown on the web page. If I clicked a directory link, it would expand; if I clicked a file link, the FTP library would kick in and transfer the file to my machine. It no longer mattered whether the backend was Subversion, Perforce, or BS.
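The middle step, turning captured shell output into HTML links, is the part that made the backends interchangeable. Here is a hypothetical sketch of it (the original was Java; the line format and the `?expand=`/`?fetch=` URL scheme are invented for illustration): directory entries become expand links, file entries become fetch links that would trigger the FTP transfer.

```python
# Sketch: render a captured directory listing (one entry per line,
# directories marked with a trailing "/") as a list of HTML links.
import html

def listing_to_html(lines):
    """Turn listing lines into an HTML <ul> of expand/fetch links."""
    out = ["<ul>"]
    for raw in lines:
        name = raw.strip()
        if not name:
            continue  # skip blank lines in the captured output
        safe = html.escape(name, quote=True)
        if name.endswith("/"):
            out.append(f'<li><a href="?expand={safe}">{safe}</a></li>')
        else:
            out.append(f'<li><a href="?fetch={safe}">{safe}</a></li>')
    return "\n".join(out + ["</ul>"])
```

The point of the design is that only the command sent over telnet and the listing parser know anything about the specific backend; everything from this rendering step onward is identical for all of them.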


Various private tools “just because”

In the download section of this site, you can find several other tools that originated in my spare time and found their way into the company where I worked at the time. They were part learning experience (for example, what does ICU do for sorting?) and part "why do I have to go to this place and then to that place to <get u-escapes, see a character in this strange charset, etc.>?"