Long time no post…. Now I’m just here to check in and make sure the site’s still working. :-). Well, and to say that I’ve moved to Google’s new Durham office, where I’ll keep working on the same networking problems as before. It’s good to be back in my home state, where the kids can hang out with family.
Wow! Another year gone by with no posting at all. Time flies when you’re having fun.
Work continues to be great. We’ve added a new child to the mix, which is helping keep me busy. I’ve been cooking less, though I still get to do it on the weekends regularly. For Christmas this year I got a new ice cream maker, WAY better than the old one we had: no longer do we have to deal with loading ice and rock salt; we just freeze the bowl in advance. We also got an Instant Pot, a programmable pressure cooker. Now I just have to figure out how to use it effectively. Early attempts have had mixed success.
Hard to believe it’s been so long since I last posted…. The last time I posted, I was still a Purdue employee. Now I work at Google, which, I might add, has been a blast. I’m very much enjoying it.
After a stint of using a Mac Mini as a server, I’m back to using a Linux machine as a server. While the Mac Mini was somewhat adequate as a host, I really missed the Linux package management, ease of configuration, flexibility, and server-side monitoring tools that didn’t assume I live and breathe my whole life using one OS. I don’t promise I’ll post a lot more frequently, but rest assured that I will keep running the site.
As an undergraduate (and for a brief period once I graduated), I worked on areas of discrete math, including Venn diagrams. In fact, I helped develop the constructive proof that rotationally symmetric Venn diagrams exist for all prime numbers of curves [1]. Note that for non-prime numbers of curves, rotationally symmetric Venn diagrams are known to not exist.
However, these diagrams (graphs) are not “simple”. A bit later, I helped Ruskey, Savage, and Weston prove that you could produce “half-simple” rotationally symmetric Venn diagrams for prime numbers of curves [2]. But it had remained unknown whether they could be fully simple. Today, I read that it has been shown that for 11 curves, they can be. See Ruskey’s writeup for pictures, or this blog post by Adrian McMenamin for a discussion of what simplicity means and an overview of the technique.
Note that while I made contributions, Ruskey and Savage are considerably more established in this area; Frank Ruskey is particularly well known for his work in graph algorithms. So when I talk about my part of this research, it was of course but one small piece of the scientific process. It’s nice to see that the work has continued, and to feel like I helped make this happen.
[1] Jerry Griggs, Charles Killian, and Carla Savage. Venn Diagrams and Symmetric Chain Decompositions in the Boolean Lattice. Electronic Journal of Combinatorics, Volume 11, January 2, 2004. [http://chip.kcubes.com/research/venn/v11i1r2.pdf] (An article about this result appeared in Science, Vol. 299, January 31, 2003, and it was the subject of a front-page article in the January 2004 issue of SIAM News. Additionally, this work was featured in the December 2006 issue of the Notices of the AMS.)
[2] Charles Killian, Frank Ruskey, Carla Savage, and Mark Weston. Half-Simple Symmetric Venn Diagrams. Electronic Journal of Combinatorics, 2004. [http://www.cs.uvic.ca/~ruskey/Publications/HalfSimple/HalfSimple.pdf]
When I learned about the ACM Author-izer Service (http://www.acm.org/publications/acm-author-izer-service), I was initially pleased. This is intended to be a way for authors to freely share access to the “authorized” ACM version of a conference publication. I didn’t jump right out and use it though, because it’s just not high on my priority list to change things up.
More recently, I saw someone else use it, so I reconsidered whether I should start. I clicked through to a paper, selected the Author-izer option, and was all ready to get my link, when I saw that it wanted to know the URL of the page that would be referring to it.
Uh, sorry, but that’s not something I can provide. It’s not that I don’t know some (most) of the URLs on my site that might refer to it; the problem is that the service doesn’t allow multiple URLs. Just one, per author. So I could put the link on exactly one webpage (well, URL anyway). That just doesn’t work for me. I have bibliography items listed in at least five places, and most in at least six: my departmental profile, my departmental home page, my online CV, my personal website, and my research group’s website. Then, in most cases, there is a blog entry written about the article too, which provides a link to it.
So, thanks but no thanks. Maintaining consolidated download statistics just isn’t that important to me, nor is having the “authorized” version of the paper available to my readers.
However, in reading about this, I did learn that I need to make sure all my ACM papers on my site are “preprint” versions, meaning they are specifically NOT the ACM authorized version. Just as well, since for some papers, we want to fix typos, too.
In case you are reading this wondering why all the complexity, it has to do with copyrights. When authors publish work at ACM venues, they reassign their copyright to ACM, for reasons that are debatable, and have been debated by many recently (search the web for Open Access, ACM, and USENIX). Authors only retain the right to distribute “preprint” versions of their paper. Granted, ACM’s policy is better than some publishers’, though not as good as USENIX’s.
So in conclusion: if you maintain only one website where you post links to your papers, great. Use Author-izer. Otherwise, I’m not sure it’s worth your time.
So I’ve been annoyed by the increasing use of URL shorteners. What is a URL shortener, you ask? Well, it’s a service, provided by an owner of a short domain name, that provides URL redirection: it gives you a short URL you can use with services like Twitter or Facebook, though the idea was originally conceived for URLs in emails, to avoid copy-and-paste problems when a URL wraps onto the next line. When you follow a short URL, your browser contacts the owner’s web server and receives an HTTP response with a 301 or 302 redirect status code telling it where the actual content lives, which the browser then fetches in turn. Popular services include tinyurl, bit.ly, goo.gl, and t.co.
So why am I annoyed by it? Mainly because I cannot see where the link is going to end up. I don’t like clicking on links I get from friends unless I know what website I’m going to end up on, primarily out of concern that the link might take me to a malicious site, or a site I might find offensive. A number of friends post links that I suspect I would find interesting, yet I never follow them because they are shortened URLs.
A secondary concern is the tracking done by the URL shortening company. It allows a third party company to track all references to a destination site through the posted URL, which to me seems like a loss of privacy.
Granted, for services like Twitter, which have a small upper bound on the message size, something like this is needed, albeit only because of the somewhat arbitrary limits placed on message sizes. So what solution might I offer? For my primary concern (knowing where you’re headed before you visit the website), it would help if browsers recognized the URLs of short-URL providers and used a HEAD request to determine the actual destination to present to users. Alternately, a browser might include a new feature for issuing a HEAD request for a link instead of a standard GET request. A HEAD request, rather than fetching the page, just asks the server for the headers the response would contain; the redirect target in those headers could then be shown to the user, so they know where the link actually leads before they commit to following it.
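To make the idea concrete, here is a minimal sketch in Python of what such a browser feature could do: issue a HEAD request, refuse to follow the redirect, and report the Location header instead. This is an illustration of the technique, not any browser’s actual implementation; the class and function names are mine.

```python
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects so we can inspect them instead."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def resolve_short_url(url):
    """Issue a HEAD request and return the redirect target, if any."""
    opener = urllib.request.build_opener(NoRedirect)
    request = urllib.request.Request(url, method="HEAD")
    try:
        response = opener.open(request)
        return response.geturl()  # no redirect: the URL stands as-is
    except urllib.error.HTTPError as err:
        if err.code in (301, 302, 303, 307, 308):
            return err.headers["Location"]  # where the short link points
        raise
```

A browser (or plugin) could show the result of `resolve_short_url` in a tooltip, letting the user see the true destination without ever fetching the page body.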
I learned this morning that our submission to the 2012 USENIX Annual Technical Conference (ATC) has been accepted for publication. As with other paper announcements on this blog, I am merely sharing the good news, and forward-referencing an eventual post describing the paper at our research group website (http://www.macesystems.org/). Briefly, the paper is about supporting a new failure model for programming large scale distributed systems, allowing those systems to ignore crash-restart failures using our otherwise pre-existing Mace programming model. Sunghwan Yoo is the main student author on the paper, and it is done in collaboration with Terence Kelly at HP Labs, Hyoun Kyu Cho—a prior intern of his, and Steve Plite—of the IT staff of the CS department at Purdue.
In the quest to be paperless, both for environmental reasons and for being gentler on my back, I seek to not carry around printouts of documents that are as easily accessed using my tablet, phone, or computer. However, I have grown increasingly frustrated that my attempts to be productive without paper are hampered by the FAA and its historical policy preventing any electronics from being used during takeoff and landing. Since on many of the flights I take this is more than half of the flight time, my air travel time has been wholly unproductive unless I have paper documents to work with. I was prepared to write a post criticizing this, questioning whether any data supported the risks, and considering how a tech company like Apple or Amazon could score major points by convincing the FAA to certify their tablet devices for takeoff and landing usage. Accordingly, I was very encouraged to read in articles such as this one at Ars that the FAA is in fact planning to take a “fresh look” at the use of certain electronic devices during takeoff and landing.
If such efforts fail, I next wonder what technology could offer: perhaps a display that can be loaded before takeoff and “frozen” in a powered-off state, large enough to keep me busy reading during takeoff and landing…
I just got done replacing my iPhone screen, which I shattered over the break. The part was $30 on Amazon, as compared to a $130 third party repair, or probably a $200-300 Apple/AT&T repair. Yes, I am feeling pretty satisfied right now. 🙂
There are plenty of good videos and resources out there, so I won’t add my own, but would just like to say—yes, it can be done!
So I learned last night that our submission to NSDI was accepted. It describes the methodology behind a tool we built, which we call Distalyzer. Distalyzer helps developers diagnose performance problems in their systems. It works by exploiting a minimal amount of structure in the logs, then doing two kinds of analysis (t-tests and dependency networks) to discover the most relevant and most divergent aspects of groups of log instances. We have used it to find bugs in Transmission and HBase, and applied it to another system’s problem (TritonSort) to show reductions in the effort needed to diagnose the problem.
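To give a flavor of the t-test half of that analysis (this is an illustrative sketch, not Distalyzer’s actual code, and the latency numbers are made up): given a feature extracted from two groups of logs, a t-statistic measures how strongly the groups diverge on that feature.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic: the gap between the two sample means,
    measured in units of their combined standard error."""
    va, vb = variance(sample_a), variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical per-request latencies (ms) pulled from two groups of logs:
fast_runs = [102, 98, 105, 99, 101, 97]
slow_runs = [140, 155, 138, 160, 149, 151]

t = welch_t(fast_runs, slow_runs)
# A large |t| flags this log feature as one of the most divergent
# between the well-performing and poorly-performing groups.
```

Ranking features by |t| is one simple way to surface the log variables most worth a developer’s attention first.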
Soon there will be a blog post at our research group website which describes it in more detail, but I wanted to go ahead and post about the good news.