Scala

I started looking at Scala a few months ago. I’m working my way through the new (draft) Scala book, Programming in Scala, available on the Artima website.

So far I’ve installed the SDK and can compile and run some sample programs from the command line. The next thing is to get a development environment up and running. jEdit is OK. There’s an edit mode for Scala so you get syntax highlighting and there’s a command/console plugin so you can call the compiler and run programs. The SDK documentation is apparently out of date, since the location of the jEdit files is, on my machine, under

/usr/local/share/scala/misc/scala-tool-support/jedit

But it doesn’t give you a higher-level view, like a list of classes and their members, or navigation between classes, so now I’m installing the Eclipse plugin for Scala. I’ve never used Eclipse, and I feel this is going to be a big time sink; on the other hand, if I get over the learning curve I can probably build more complex applications within it, just because I can visualize and navigate the structure. There’s also an early plugin for IDEA, which was abandoned for the time being by JetBrains but seems to have been picked up by a Scala programmer in the last few weeks, and an early NetBeans plugin, which I took a quick look at. For now, what seems to make the most sense is just to use a good text editor and not fuss with plugins that aren’t that far along yet.

I’ve read through the Scala Tutorial (the one for Java programmers), which is a useful read. The Scala docs can be downloaded from the website, at http://www.scala-lang.org/docu/index.html. There’s a brief introduction that’s pretty readable, hosted on Artima: http://www.artima.com/scalazine/articles/steps.html. There is also a Scala Wiki with code samples at http://scala.sygneca.com/. Some of the code I found hard to follow at this point, but one example on JDBC was pretty good: http://scala.sygneca.com/code/simplifying-jdbc.

A. Sundararajan has a great blog on various topics related to scripting, dynamic languages and JVM-hosted languages. A great read (and one of those blogs whose earlier postings are worth going back through). On Scala he has a handful of entries, including a two-part comparison with Java and a piece on singletons in Scala; searching the blog for “scala” turns them up, though I’m not sure a search link would stay good for long.

There are more resources popping up all the time. On IRC, you can follow the #scala group on freenode. The official mailing lists (http://www.scala-lang.org/community/) have been split (which I’m very happy about), so there’s now a language list, a debate list (advanced) and a users’ list (less advanced). A number of people are blogging; my favorite at the moment is the new stuff on Code Commit.

Posted in scala | Tagged | Comments Off on Scala

The JVM as a Language Host

A frequent topic of discussion these days is whether, and to what extent, the Java Virtual Machine is suited for languages other than Java, and in particular “dynamic” languages like Ruby. Sun says yes, others disagree–so how do we settle it?

One line of argument says that the JVM was developed first and foremost to host Java, and that it is ill-suited for languages other than Java. A similar argument is applied against Microsoft’s CLR design, and C#: people say you can run any language on the CLR, as long as it looks like C#.

It’s a fact that many, many languages have actually been ported to, rewritten for, or written for the JVM. Some of these are experimental languages or ports, and others are used in production–for example, “scripting” languages like BeanShell. But what we don’t know, and can’t easily evaluate, is whether those languages are constrained by the JVM’s design, or not–would they be better off in another virtual machine? Can we contemplate a more universal, welcoming virtual machine which would give us more freedom in language design and implementation choices?

This is how I understand the question, but I think it’s addressing the wrong point. The JVM is a good choice because its behavior and internal characteristics are well-defined, and there are several implementations to choose from, not just one, both commercial and open-source. The availability of multiple versions (and the fact that the same byte code actually works with them, unchanged) is proof that the spec is tight enough to provide a reliable basis for developing a language on the JVM. Moreover, some JVM implementations (Sun, IBM, BEA) have proved themselves in the most demanding production environments, with a high degree of availability and solid performance. Last, the JVM has proven itself to be highly portable across machine and operating system architectures. Those are good reasons to choose the JVM, at this point in time, to host new languages. Eventually, much of this will apply to the CLR as well.

My point is that in discussing this, we should all be more clear about the strengths and limitations of the JVM, and that we can derive those from the specification. What we have in hand is a VM that is already working, production-quality, high (or reasonably high) performance, and which has been ported to a number of CPUs and operating systems. Start from that as a basis. Then say–given what the specification says is available to me–what is possible to do, language-wise? The approach some people seem to take is–I have a wish list for a language, why is the JVM so limited, and why doesn’t it let me easily implement feature X?

I’m not saying we should exclude, out of hand, other virtual machines, VM designs, etc. Obviously the JVM will be replaced with something better at some point, in terms of wide deployment and market acceptance. It seems, for example, that decisions made early in the design of the JVM limit its applicability or usefulness for certain language features–dynamic code replacement (hot-swappable code), unsigned ints, and so on. It has limitations, great, let’s admit that and move on. What I’m arguing here is that there are good reasons to choose the JVM (or the CLR, if you will) to host new languages, until such time as competing approaches to VMs mature and prove to be really competitive alternatives to the current mainstream.

(Originally posted on this JRoller blog entry)

Posted in Uncategorized | Tagged | Comments Off on The JVM as a Language Host

Pragmatism(1)

The last couple of days people have been pointing out Chris Oliver’s blog on his new language, F3. F3 is a language hosted on the JVM which targets creating content in 2D, using Java2D APIs. I won’t go into the details, but I recommend you try out the demos (he’s got two or three up already). It’s an impressive piece of work from what I can see. Mostly I like the fact that the language just seems so clean.

Which brings me to this blog. Some of the comments posted about F3 are along the lines of: you can already do this in JVM language X, why create another one? My own response is: if language X is so great for that purpose, why doesn’t it already own the marketplace of ideas? It’s the job of the language’s owner, and of its users, to promote it; and if it’s worth it, if it’s convincing in what it does, then you’ll see people moving over to that language, and it will gain mind-share. Mr. Oliver has in front of him the task of gaining mind-share (assuming he wants to), as do the Groovy, BeanShell, JRuby and other JVM language enthusiasts.

But what the whole discussion makes me think of is how successful languages are sometimes those that had a very narrow focus to begin with, despite the fact that their use grew beyond that original, limited domain. Java is one example–they (thought they) needed a language which could be downloaded over a network and run on a small, limited-capacity device. Because the code needed to move around the network, portability and security were key. It turns out that their language has uses far outside the domain of portable (or set-top) electronic devices, though it’s now there as well–but I think the language is successful because it focused on a very specific domain, and didn’t try to solve all problems at once.

And my guess is that Python, Ruby, Perl and PHP also had that sort of effect, by being useful in a given domain, and focused on developing and honing themselves for very specific tasks.

What I personally have against some of the “newer” JVM languages is that they try to solve all of the perceived problems in Java at once. I end up unsure if they’ve solved any of those problems well–plus they seem to me to be so flexible that in practice, people will end up mixing a whole number of powerful constructs into one big lump that is hard to decipher.

What I’d prefer are more domain-specific languages–and no, not a language for building domain-specific languages, but rather languages, like XSL/T or F3, written by experts (or at least very knowledgeable people in a given field), focused on solving one class of problems and solving it elegantly. As for the problems with Java the language, I’d prefer to see a sort of “Java upgrade” (which might include a JVM upgrade) that tackles the six most important or annoying issues, rather than a new Swiss Army knife of a language.

Check out F3–should be available soon on Java.net, but you can already try out the demos.

(originally published as this JRoller blog entry)

Posted in Uncategorized | Tagged , , , | Comments Off on Pragmatism(1)

On the road again

For a long time I haven’t been blogging, partly because I really wanted cross-posting with my own website, and was too lazy to set it up. I will therefore be courageous and just start blogging again, as I have a whole number of ideas that keep bubbling up, and which I want to record somewhere to refer to later on. Tonight is blogging night.

Now that that is out of the way, man, do I wish I had some free time. My working life is pretty full, which, mind you, is a good thing, but over the last few months I also wish I’d just had some time to hack on some open-source and personal soft-dev projects. I am already committed to two open-source projects, Flying Saucer and DataBuffer. DataBuffer has gotten no attention at all; I’ve only had the chance to move the code from the original binding project (on SwingLabs) to its own home. On Flying Saucer I’ve just been tending to the mailing list and pointing people in the right direction when problems arise. Sadly, neither of these is really where my heart is at, though I believe both are addressing problems that are core to the Java platform right now. The software things I’d like to be working on all revolve around data: sharing, structuring, manipulating. So when I do have free time, I feel conflicted: given my commitments, I should be working on the things I’ve committed to, but honestly, since I have so little free time, I’d rather be working on something else.

But I digress. Flying Saucer has been getting great attention lately, what with a killer article on java.net about using FS to generate PDFs, SVG and images on the fly–I’d never thought of generating SVG with it, and I’ve been on the project for more than a couple of years now. We have a bunch of users generating images on the fly–generating XHTML on the fly, actually, then rendering to image formats. I’m hoping the mailing list will continue to thrive and that eventually we’ll get some more developers. Pete’s checked in some great work for our next release, R7, but we’re under-staffed right now, given what we want to accomplish and what we need to get done to make a real impact as a utility library.

I’ve been hacking around with two features–need to check those into a branch, actually. One is support for style-relative images (where images are embedded in CSS and resolved relative to the source/page of the CSS), and the other is support for background image loading. Oh, and there’s also a cleaner model for handling XML namespaces via XMLFilters. For a long time we’ve used NamespaceHandlers, which parse DOM Document instances already in memory for particular information–title, styles, legacy HTML; the new approach uses XMLFilters to accomplish the same thing. It’s much more readable, and once we have some generic filter classes we should be able to offer a much more robust approach to scraping important info out of incoming XML or XHTML.
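
To show the general technique (this is a generic SAX sketch, not Flying Saucer’s actual NamespaceHandler or filter API), here’s a capturing XMLFilter that pulls the document title out of the stream during the parse, instead of walking the finished DOM afterwards:

    import org.xml.sax.Attributes;
    import org.xml.sax.SAXException;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.XMLFilterImpl;

    // A capturing filter: passes every event through to the next handler
    // in the chain, but remembers the text content of the <title> element.
    public class TitleCaptureFilter extends XMLFilterImpl {
        private final StringBuilder title = new StringBuilder();
        private boolean inTitle;

        public TitleCaptureFilter(XMLReader parent) {
            super(parent);
        }

        public void startElement(String uri, String local, String qName,
                Attributes atts) throws SAXException {
            if ("title".equals(local)) inTitle = true;
            super.startElement(uri, local, qName, atts); // pass the event along
        }

        public void characters(char[] ch, int start, int length)
                throws SAXException {
            if (inTitle) title.append(ch, start, length);
            super.characters(ch, start, length);
        }

        public void endElement(String uri, String local, String qName)
                throws SAXException {
            if ("title".equals(local)) inTitle = false;
            super.endElement(uri, local, qName);
        }

        public String getTitle() {
            return title.toString();
        }
    }

You chain the filter in front of whatever ContentHandler builds the Document, and once the parse completes you ask the filter for what it captured: one pass, no re-walking the DOM.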

The trick with these features is that they all require new relationships up and down the food chain: you need to know who owns the filters so you can attach them during the document parse, the image loader needs to notify someone as images are loaded (and the images need to be queued somewhere as you find them), and so on. Actually, adding stylesheet information so we can track relative image references in CSS was kind of a hassle, because of how our current CSS APIs are constructed (and for that I’m sadly to blame, having worked on many of them in the first place).

Hopefully, all of this will lead to improvements in the APIs. Historically, that’s been a truism of the project. We argue a lot about APIs and strategies and patterns and eventually come to an improvement. In this case, we need to clean up some of the ownership issues–like who owns filters, where the filters store what they’ve captured (assuming they are capturing filters), where images get queued, who’s responsible for creating form widgets, etc.

As for DataBuffer, well *sigh*. No time for it now. The current codebase is pretty good right now, as quite a lot of work was done on it in the last couple of years. The JavaDoc’s in good shape, for the most part. There are a lot of exciting possibilities once we get a multi-db testing framework in place to prove portability, and once we get some ease-of-use features coded, like auto-generated keys. For future development, I think we can do some rocking things around views using the decorator pattern aggressively. But first–time. Maybe around Christmas.

(Originally published as this JRoller blog entry)

Posted in Uncategorized | Comments Off on On the road again

Xilize!

So a few days ago I took another look at Xilize, and man, is it great. We’re preparing a release of Flying Saucer, the XHTML/CSS renderer project, and I needed to update (and write) a bunch of documentation. The last docs were written in plain HTML, and I’ve gotten to the point where I’d rather pull my nose hairs out with a paperclip than code in plain HTML. But I digress. Xilize is a text-markup syntax and an engine that converts the marked-up text to clean XHTML. There are Xilize tags for basically all the HTML you need, and if necessary you can always drop down to raw HTML. But the kicker is that for the most part the syntax is very light, meaning you can express your formatting intentions with very little extra visible baggage. I was able to just write what I wanted to say, and have it come out the way I wanted.

The idea follows the Wiki idiom of basing formatting commands on plain-text entry, such that writing a document is similar to writing an email. Paragraphs need no extra tags, lists just need a * or a # mark, and so on. The actual syntax Xilize uses is apparently based on Textile, which I haven’t used myself. Whatever the origin, I was able to get a decent-looking User’s Guide written without much effort. Or rather, the effort went into the content, which is just as it should be.
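
To give a flavor of it, here’s roughly what the source looks like. This is the Textile-style syntax Xilize is based on, so treat the details as illustrative rather than exact:

    h1. User's Guide

    A paragraph is just a paragraph; no tags needed.

    * a bulleted item
    * another bulleted item

    # step one of a numbered list
    # step two

The engine turns that into clean XHTML headings, paragraphs and lists.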

I then tried creating a PDF for the document. This was interesting because I got to look at a DocBook toolchain once again. Every time I look that up, man, is it a lot of work to get running. I ended up using the following chain: Xilize-formatted text -> XHTML (Xilize) -> DocBook XML (html2db.xsl) -> XSL-FO -> PDF (FOP). The XSLT steps were run with xsltproc. That all worked well, but as you can imagine, it took a while to figure out. The downside is that if you don’t like the formatting you end up with, you have to read a lot about what formatting is allowed through FOP, which, let me tell you, is a lot. That meant I had a nice PDF but a bunch of work ahead of me before I got a layout that I really liked.

But–shameless plug here–our upcoming release of Flying Saucer now supports rendering to a PDF file, instead of just rendering to a graphics canvas (Java2D), which was our original target. This means I can create XHTML (via Xilize) and let CSS indicate how the pages should be laid out. We don’t have perfect control over the output, but as CSS was already in my toolkit, it was an easy decision to make. The end result, as PDF, is not bad for a few hours’ writing and a little bit of formatting.
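
The rendering step itself is short. Roughly, it looks like this (a sketch: the file path is made up, and the details may shift a bit before the release):

    import java.io.FileOutputStream;
    import java.io.OutputStream;

    import org.xhtmlrenderer.pdf.ITextRenderer;

    public class XhtmlToPdf {
        public static void main(String[] args) throws Exception {
            // Render an XHTML document, styled by its CSS, straight to PDF.
            OutputStream out = new FileOutputStream("users-guide.pdf");
            ITextRenderer renderer = new ITextRenderer();
            renderer.setDocument("file:///path/to/users-guide.xhtml"); // made-up path
            renderer.layout();
            renderer.createPDF(out);
            out.close();
        }
    }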

All of this was possible because Xilize is available under the GPL, and what’s more, there are excellent docs for it. I want to stress that the DocBook toolchain is impressive and well-documented, and a lot of people have worked hard on making it work right. jEdit’s docs are written in DocBook, and I think it’s one of the best examples around of how to write documentation for an open-source project. That said, Xilize simply rocks. I’m actually so relieved at not having to struggle with some documentation tool that I’m looking forward to extending our documentation, and seeing where I can help out with other FOSS projects I’m interested in. And, who knows, might be time for a website rewrite of my own…

(originally posted as this JRoller blog entry)

Posted in Uncategorized | Tagged , , , | Comments Off on Xilize!

Navigation Musings

I’ve been looking at several different UI tools and programs for hierarchical information display. Some of these are almost purely graphical, others are purely textual, and others split the difference. The ones I discuss below include TouchGraph, HyperGraph, FreeMind, JOE and WikiPad.

There are actually a lot of these kinds of kits and programs available. Some have led to secondary tools built on top of the more basic ones–there are a number of extensions to TouchGraph, for example, for browsing Google search results. The utility of these varies depending on what you’re trying to accomplish, of course; but what interests me are the advantages and disadvantages of using certain UI metaphors to grasp and navigate around information.

Take TouchGraph, for example. What I love about this toolkit is that the visual effects seem very fresh (I don’t get bored with them) and the rendering is smooth. You can do neat things like zoom or rotate the graph using sliders (not all the demo/applications written with TG support this), as well as filter out the edges to reduce visual clutter.

But TG also shows the downside of this sort of layout. I find I need a certain amount of empty space between nodes, and I need to limit the number of visible edges, or else the screen becomes too cluttered to be usable. This is due in part to a limiting factor: nodes are displayed as text strings, which take up horizontal space, and edges that cross each other are hard to follow. So I find I need short text strings, few text strings, and few edges displayed at any one time to get useful information out of the visual model.

The TG-style views don’t represent hierarchies well, to my eye, even if the data source in question (say a website) has, in common use, a hierarchy in place (rooted at the home page). It’s true you can establish a “central” node with related nodes radiating outwards, but this seems to go against the grain for websites that really are organized hierarchically.

FreeMind has the advantage of displaying something more like a tree (though I think it can be made to suggest a graph), and as the nodes are laid out left-right or right-left, there is a clear way to navigate down the hierarchy through eye movement. It also suffers from the drawback of nodes-as-text, and I find that nodes with long text strings just, again, clutter up the view.

Which brings me to another point, which is the purpose of using these things. I’ve used JOE to write outlines of ideas, analyses and task lists. It works well for that purpose, and my old favorite, the Word outline mode, was long ago relegated to less-favorite-son status. JOE is better than a text editor for this because it enforces the hierarchical structure–you’re editing an outline, and the editor is only an outline editor–and the top-down layout supports longer strings, as these can take up the full horizontal width of the screen without negotiating with other elements on the right or left.

But then, with JOE I can’t represent a graph–a node in the outline that has more than one parent node. With WikiPad I think I’m getting the best of both worlds, as each page in the local Wiki can belong to either a strict tree or a graph, depending on how it’s referenced.

Still, there is something immediately captivating about these graphical views, and I can’t put my finger on exactly what that is. For one, it’s somehow more inviting–I *want* to click on links and move the nodes around. I feel I have some control over what I see and how I best want to see it. The textual approaches of JOE and WikiPad (WikiPad does have a navigator, but it’s a standard tree-style component) put my mind in some kind of box–a box in which I sometimes feel productive and sometimes feel constrained.

But there’s this problem of content. It seems that without some conventions about how to identify nodes, there’s just too much information available in the visual layouts. With Entity-Relationship Diagrams (ERDs), there are some very consistent conventions, and at the same time, there is an important constraint on the structure of the nodes in an ERD, which is that each entity (often a table in an RDBMS) has a name, then columns–so columns can be displayed in a list below the name, forming a naturally rectangular view of the node. That convention is too limiting for general-purpose information hierarchies, though.

The last problem I want to mention is purely a UI problem, which is the modal behavior of all this. If you expand the window that holds the TG GoogleBrowser, for example, you can actually fit quite a bit of information on screen. The problem is what you do with it. At the end of the day, being able to navigate nodes and follow a set of links in a cool-looking graph doesn’t help me–I always want to drill down to the content. And when I do that, I need to pop out of the graphical mode into some other mode, like a browser. That change of context back and forth I find somewhat bothersome.

I love the work people are doing with this, and at the end of the day it’s cool to see the variations in how you can play with this stuff. Check out the links above–the HyperGraph website has a neat built-in navigator applet to move around the website.

(originally posted as this JRoller entry)

Posted in Uncategorized | Tagged | 1 Comment

Things that still work, years later

Just came across the subArctic website. subArctic was apparently a project out of Georgia Tech for a “new” UI infrastructure for Java–back when AWT was all the rage. The amazing thing is how many of the demos still work, ten years later, and also how many of the ideas are still pretty cool. Romain Guy needs to look at this stuff :).

One I liked: Rot13 inspector. You have a message encoded in the mysterious ROT-13 code–you drag a transparent window over it and the code is translated in the overlapping portion. 
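
ROT-13 itself is trivial, which is part of the charm; the clever bit was the transparent “lens” window. For reference, the whole cipher fits in a few lines of Java:

    // ROT-13: rotate each letter 13 places. Applying it twice returns the
    // original text, which is why the same method both encodes and decodes.
    public static String rot13(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            if (c >= 'a' && c <= 'z') {
                sb.append((char) ('a' + (c - 'a' + 13) % 26));
            } else if (c >= 'A' && c <= 'Z') {
                sb.append((char) ('A' + (c - 'A' + 13) % 26));
            } else {
                sb.append(c); // digits, punctuation and spaces pass through
            }
        }
        return sb.toString();
    }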

A number of these need to be recompiled using new APIs–there are security exceptions and browser issues that prevent them from running.

I don’t know much about the project or what happened to it, but kudos to them for trying new ideas out.

Update: Sad to say, it looks like the subArctic website has been taken down. You can still find old links via the Internet Archive–go to http://archive.org, then search for the URL http://www.cc.gatech.edu/gvu/ui/sub_arctic and go from there. Not sure if all the source code and samples are easily available, though. (10/12/2008)

Posted in Uncategorized | Tagged | Comments Off on Things that still work, years later

Rich clients look like the future (to me)

I’ve been thinking about rich client apps (RCAs) versus HTML client apps (HCAs), and why I tend to like rich clients better. The RCA that most impressed me in the last few years was iTunes; for an HCA, it would probably be this German website for finding rooms for rent. The world has certainly shifted towards HCAs in a big way. It’s not that people aren’t writing RCAs, or that we don’t use them all the time, but you’ll find a lot more new work in HCAs these days than in RCAs. I tend to prefer RCAs, though, so the question is: what’s going on here?

RCAs have one general killer aspect to them: they are context-specific. By that I mean that, despite following the general GUI guidelines of their respective platform, they tend to be focused on one particular problem, or one particular task set that one needs to address. iTunes deals with music: downloading it, organizing it, playing it. There are secondary features, like burning CDs, but the main thing is I can see my music as individual files, listed by name, band, album, and I can work with the files in a way that seems natural to me. In fact, I don’t even think of them as files–I don’t see file names, or file sizes–I see my music as I would expect songs to be shown to me–there are song titles, album name, band or singer, length of the song (in minutes). I work in a context that is music-specific.

An RCA can be context-specific for two reasons, I think. One is that the toolkits for writing RCAs are much richer than HTML. HTML has a very limited set of GUI controls, formatting and layout commands, and programmability. RCAs have not just the default toolkit for the operating system, but any new control you can imagine. Twenty years ago you might have adjusted the volume using a spinner; then you could use a slider; now you can use a “knob” that you “turn”. And you can do this because there is a low-level set of constructs the operating system provides, not only to control what you’re rendering but also to interact with input (mouse, keyboard, tablets) and to communicate with your program. What this means is that your interface can be specific to a particular task not only in what it accomplishes (managing music) but in how you interact with it. For a music program, a slider control for volume is great. For a zoom feature in a text editor, a spinner might be better. This control over the interface enhances the user’s illusion that they are working in a specific context.
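
In Swing terms, for instance, wiring up a task-appropriate control takes only a few lines, and a custom “knob” would just be a JComponent with its own painting and mouse handling, exactly the low-level hooks HTML doesn’t expose. A minimal sketch of the slider case:

    import javax.swing.JFrame;
    import javax.swing.JSlider;
    import javax.swing.event.ChangeEvent;
    import javax.swing.event.ChangeListener;

    // A task-specific control: a volume slider whose range and ticks are
    // chosen for this one job, wired directly to the application's logic.
    public class VolumeControlDemo {
        public static void main(String[] args) {
            final JSlider volume = new JSlider(0, 100, 50);
            volume.setMajorTickSpacing(25);
            volume.setPaintTicks(true);
            volume.addChangeListener(new ChangeListener() {
                public void stateChanged(ChangeEvent e) {
                    System.out.println("volume: " + volume.getValue());
                }
            });
            JFrame frame = new JFrame("Volume");
            frame.getContentPane().add(volume);
            frame.pack();
            frame.setVisible(true);
        }
    }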

Secondly, an RCA can be context-specific because that’s how we understand RCAs in this phase of technology. We expect websites to have a navigation bar, menu, or side bar, some main content, some images, and that’s about it. While an RCA can stick to the rules and also have a menu bar, toolbar, and main panel, it doesn’t have to–and we are OK with that. We get used to switching interface metaphors when using different RCA apps. This probably limits the range of the apps we can work with successfully, but on the other hand, means that the interface in each case can be targeted to the problem that app is trying to address.

HTML, on the other hand, offers no great flexibility in interface controls, nor do we expect that websites will present drastically different UIs for different purposes. This is probably the great strength of HTML as well–it supports a very simple, limited feature set. Those limitations both allow you to learn it quickly as well as to build new sites quickly, or modify the existing ones with less work than would be the case for an RCA.

There is a lot I am skipping over, of course. HTML is delivered live (unless cached somewhere), so the UI can be updated with zero deployment cost. HTML can use CSS for layout and formatting, scripting for programmability, and supports some limited plugin extensions (like Flash, applets, etc.). It doesn’t have a compile cycle. Most RCAs on the other hand, are written in some programming language that has a compile cycle, use complicated toolkits for building the UI, have costly deployment, etc.

But at the end of the day, I’m getting pretty sick of the limitations of HTML. I’m sick of not having truly context-specific “web applications”. It’s true that it’s great to have universal access to my email, to my bank and to the news. That much I like. But I can’t imagine writing any serious document on a web page, even if I’m doing it in an editor applet of some kind. Too many things are missing; I can’t seem to leave the context that websites provide, automatically, in the background.

There is some hope for the future, in that a number of groups are working on making RCAs easier to deploy and easier to develop. Apple is coming out with a desktop-based HTML/JavaScript toolkit for small mini-applications. Microsoft is pushing XAML. Mozilla has XUL and JavaScript, and Sun is starting to focus on making Swing easier to use, and many Java FOSS developers are starting to play with XML as a way to store layouts.

My guess is that Apple is looking in the right direction with iTunes, in that the lines between client and web are blurred. When you browse the Music Store, you don’t have to leave iTunes to use a browser: it appears in the context of the iTunes app, and the layout looks almost exactly like the rest of the app, despite the fact that the Music Store is online. On the other hand, their concept of updating iTunes is old-fashioned: download an installer, run it, restart iTunes. Apps that can download patches and update themselves live, and that support plug-and-play plugin extensions seem like the future.

Lots more to say, but this is getting long.

(originally posted as this JRoller entry)

Posted in Uncategorized | Tagged , , | Comments Off on Rich clients look like the future (to me)

Unit-testing Performance

Performance regressions: I was wondering this morning whether it might make sense to write unit tests to verify the performance of specific operations. The basic idea is that we have a unit test that prepares the run and executes a certain operation; a wrapper test then executes this performance test a specific number of times. Timing is captured in milliseconds for total, average, min and max execution time.

The unit test for a single execution of the process controls the non-environment-specific inputs to the operation, e.g. input data and parameters. We can say those are fixed. What the performance harness does is capture the timings in a standard way, and then compare them to pre-defined performance statistics: the expected cost of the run.

The issue would be: how do we control for environment-specific factors? There are the memory available, CPU speed, operating system (and version), cache/swap allocated to the OS, JVM parameters, and so on. My idea is that these values are isolated into a performance profile. When setting up the test for the first time, you create a base profile by running the test in a “record” mode. The record mode captures all the environment details available to the JVM, which include, as it turns out, things like the operating system, the JVM version, the amount of memory assigned to the JVM, and so on. We might even capture the workstation’s name, if it has one. For this configuration, the profile is written out (with all those details), and the timings are recorded under that profile. When running the actual test, the harness looks for a matching profile, compares the test results with the expected performance, and reports deviations.

Of course, we don’t want to match on exact timings, since there will always be factors we can’t control. But let’s say we have “deviation ranges” which are allowed: 10-15% longer counts as a warning, and more than 15% longer counts as an error. 10-15% less counts as a bonus, but if a test run takes less than 20% of the expected time, that also counts as a warning–namely, that something suspicious is going on (perhaps we accidentally turned off a processing step).
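
A minimal sketch of the harness I have in mind (the class and method names here are hypothetical, just to make the shape concrete; a real version would load the expected timings from the recorded profile):

    import java.util.concurrent.Callable;

    // Hypothetical harness: run an operation several times and compare the
    // average timing against the expected cost recorded for this profile.
    public class PerfHarness {

        public static void check(Callable<?> op, int runs, long expectedAvgMs)
                throws Exception {
            long total = 0, min = Long.MAX_VALUE, max = 0;
            for (int i = 0; i < runs; i++) {
                long start = System.currentTimeMillis();
                op.call();
                long elapsed = System.currentTimeMillis() - start;
                total += elapsed;
                min = Math.min(min, elapsed);
                max = Math.max(max, elapsed);
            }
            long avg = total / runs;
            double ratio = (double) avg / expectedAvgMs;
            String timings = "avg=" + avg + "ms min=" + min + "ms max=" + max + "ms";
            // The deviation ranges described above: 10-15% slower is a warning,
            // more than 15% slower is an error, and a suspiciously fast run
            // (under 20% of the expected time) is flagged as well.
            if (ratio > 1.15) {
                System.out.println("ERROR: " + timings);
            } else if (ratio > 1.10) {
                System.out.println("WARN: " + timings);
            } else if (ratio < 0.20) {
                System.out.println("WARN (suspiciously fast): " + timings);
            } else {
                System.out.println("OK: " + timings);
            }
        }
    }

The profile matching would key off environment details the JVM can already report, like System.getProperty("os.name"), System.getProperty("java.version") and Runtime.getRuntime().maxMemory().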

In the Flying Saucer XHTML/CSS renderer project, we have a “brutal” test we run manually, where we load and render the entire text of Hamlet. While many smaller pages render in less than a second, Hamlet is guaranteed to take much longer, up to half a minute or more. But this test must be run manually and eyeballed to see what degradation, or improvement, we’ve found.

Note that, for Flying Saucer, we’d also need to control the configuration file we load the application with–in our current configuration, you can choose which XML parser to use when loading pages into memory.

So what interests me is how accurate and reliable this would actually be. Which seems to come down to: how accurate can I make the profile? Even running on exactly the same machine, there are many factors that could cause the JVM itself to run more slowly–for example, having a different set of applications running at the same time, or background services active. On the other hand–isn’t it worth a try? Getting an idea–without extra effort–of how much a set of changes to the codebase has affected performance seems like it’s worth it.

(originally posted on this JRoller blog entry)

Posted in Uncategorized | Tagged | Comments Off on Unit-testing Performance

Utility Servers in the JVM

Utilities Servers: I’ve been thinking more about Nailgun, the server which hosts small Java programs to be run from the command line. One question is: why can’t we (or why don’t we) use Java more often for common utility tasks? Think of all the command-line utilities written as executable (binary) or script files. If we didn’t have the startup cost of the JVM, would we be any better off using Java for these purposes?

Let’s assume that, automagically, we could launch any Java utility with zero startup cost. Nailgun is one approach: the runnable classes are loaded by a server, which keeps them in memory (though it could probably swap out the least-used ones). The classes are executed by a small command-line executable written in C. Since the server is started once and remains resident, we just absorb the cost of launching the C executable (minimal), plus the (local) network call to request that the class be executed (Nailgun uses a small custom protocol for this). Anyway, let’s suppose that using Nailgun or a similar approach we could reduce the cost of execution to be close to the cost of running the same utility as a binary executable. I admit this is just a theoretical possibility, and won’t comment on how realistic it is.

I’m going to call Nailgun one implementation of a “Utilities Server”, which is a server process that runs small “hosted utility” commands on demand from a local client, requested over TCP/IP. A Utilities Server is a memory-resident multi-threaded server that executes in a JVM, and can run any utility that is written in a JVM bytecode (or that can be compiled to do so).
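
To make this concrete, here’s a toy sketch of such a server. This is not Nailgun’s actual protocol or code, just the bare shape of the idea: each connection sends one line naming a class, and the server runs that class’s main method in the resident JVM.

    import java.io.*;
    import java.lang.reflect.Method;
    import java.net.*;

    // Toy utilities server. A real one (like Nailgun) also forwards
    // arguments, environment and stdin/stdout/stderr, and runs each
    // request on its own thread; this just shows the core trick:
    // classes stay loaded in a resident JVM and are invoked on demand.
    public class ToyUtilityServer {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(7777); // port chosen arbitrarily
            while (true) {
                Socket client = server.accept();
                try {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(client.getInputStream()));
                    String className = in.readLine();
                    Class<?> utility = Class.forName(className);
                    Method main = utility.getMethod("main", String[].class);
                    main.invoke(null, (Object) new String[0]);
                } catch (Exception e) {
                    e.printStackTrace(); // a bad request shouldn't kill the server
                } finally {
                    client.close();
                }
            }
        }
    }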

So, some advantages:

  1. Lots of APIs: We have access to all the Java APIs, from encryption to image manipulation.
  2. Dynamic Loading: We can reload our utilities at any time–use a custom classloader to check the file’s timestamp, or force a flush/reload on demand.
  3. Scripted or Compiled: We can run any languages the JVM supports (like BeanShell, Jython, JRuby) or compiled Java classes.
  4. Low-overhead IPC: We can pipe data between different hosted utilities using direct, in-memory data streams without returning to the OS in-between (see the sketch after this list).
  5. Security Manager: We can define one or more security managers to control exactly what our hosted utilities can do. We can also use reflection on loading a class to check for disallowed package access (e.g. uses of java.net.*).
  6. Swappable Implementations: Using dynamic classloading, we could swap in different implementations of the same API, or similar, but different APIs for a given purpose: use different XML parsers on demand, or switch between GNU/Perl/Java regexp packages.
  7. Colorful RMI: Remote method invocation, however you like it: Java RMI, Corba, SOAP, HTTP/REST, whichever.
  8. Auto-tuned Performance: If we are using a self-optimizing VM like Sun’s HotSpot, our utilities will gain speed as they are executed more often.
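
On point 4, the JDK already has the plumbing for in-process piping. A small sketch, assuming two hosted utilities that we want to chain without an OS-level pipe:

    import java.io.*;

    // Two "hosted utilities" chained through an in-memory pipe: a producer
    // writes lines and a consumer upper-cases them, with no OS process or
    // shell pipe in between.
    public class InMemoryPipeDemo {
        public static void main(String[] args) throws Exception {
            PipedOutputStream out = new PipedOutputStream();
            final PipedInputStream in = new PipedInputStream(out);

            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        BufferedReader r =
                                new BufferedReader(new InputStreamReader(in));
                        String line;
                        while ((line = r.readLine()) != null) {
                            System.out.println(line.toUpperCase());
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
            consumer.start();

            PrintWriter producer = new PrintWriter(out);
            producer.println("hello from the producer utility");
            producer.close(); // closing the pipe signals end-of-stream
            consumer.join();
        }
    }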

Those advantages are just off the top of my head. Now, some downsides:

  1. Piping: The common practice of chaining together command-line utilities using the pipe command becomes a little weird here, as any invocation of a hosted VM utility will have to route through our remote protocol. Will it be much slower because of this?
  2. Invocation: Need an easy way to invoke utilities without having to write a wrapper script to do this for us. Nailgun, for example, has a command-line executable called ng, which takes as an argument the hosted utility to run. Would be nice to avoid this and use the utilities as if they were, for example, Bash scripts.
  3. Opacity: Using a hosted VM adds a layer of indirection–where is our utility loaded from? How do we edit it and load a new version? What if two versions are available? For command-line utilities, these (my guess) would all be stored together in a directory of binaries or of scripts; in Java, these can come from a directory, a Jar, a ZIP file, etc.
  4. Security Confusion: Execution of scripts or binaries from the command line is normally controlled through file access permissions. Java has its own security mechanisms–it can get confusing if we have to mix and match these, since changing permissions for one set of utilities is orthogonal to changing them for the other.
  5. Management Confusion: As with security/execution permissions, we have all sorts of tools for monitoring, backgrounding and controlling processes that won’t work with a separate utilities server.
  6. Bad Behavior: How do we stop or kill utilities that are chewing up lots of memory or CPU cycles? With normal utilities, we just kill the process. How often would we need to kill the whole utilities server?

Still, what I’m thinking is that this idea of running a “utilities server” is, overall, a pretty good one. I like the fact that I could download a signed/trusted Jar with a bunch of these utilities written in Java or in a JVM language, and have them available as needed. I like that, given the OO patterns that allow us to write adaptors and plug-in implementations of APIs, I can change how my utilities work at runtime. And, given the huge amount of code available in the FOSS/Java world, the range of functionality we have available is pretty awesome. Glad that Nailgun is giving us a model for how to do this.

(originally posted on this JRoller entry)

Posted in Uncategorized | Comments Off on Utility Servers in the JVM