Fighting Obsolescence: t’salon

My friend Ed Tanguay said to me by email last week that “we all feel we are in danger of growing obsolete at the rapid changes in our industry”. By “we”, he was referring to programmers and people in related careers in the IT industry.

I don’t want to start a long rant about how rapidly things are changing in IT. It’s not clear to me how much my perception of the rate of change is related to my getting older, but it seems that doesn’t account for all of it, anyway.

The interesting question is what to do about our fear of becoming obsolete in the tech industry. The answer, Ed proposed, is to learn, learn, and learn yet more. Set up a lifestyle, goals, and priorities that encourage learning about all things tech-related. Moreover, be open to hearing about what is important, and new, from unexpected places: areas you don’t work in, using tools you don’t know. Don’t just learn more of what you already know – seek out and get exposed to things you don’t already know, surprise yourself, and keep pushing back the boundaries of what you know.

In that spirit, Ed and I have started a sort of local get-together, which we call the Tech Salon (I’m starting to call it t’salon, which sounds neater). The t’salon is, at the moment, an invitation-only group of geeks, including programmers as well as hardware hackers, both professional and amateur, people who are paid to do this work and people who do it to have fun or to help family and friends. In our monthly meetings there is one rule: everyone must show something before the evening is over. It’s show-and-tell for geeks, as Ed puts it. Each person decides what they show, like what they’ve

  • built, like hardware
  • written, like software
  • found, like an open-source project
  • bought, like a gadget, new phone, tablet
  • read, like a new book you want to talk about
  • visited, like a conference or workshop or co-working space

This is not about PowerPoint presentations; someone just says, “Oh, wait, I’ve been working on…” and then the rest of us move to that side of the room, stand around their armchair and look over their shoulder as they show something off. It’s informal, unscripted, and loose. In between, people just talk shop, like geeks do when you put them in a room together.

The main point is to leverage each person’s enthusiasm for what they love in order to get the rest of us motivated to learn. When you look over a geek’s shoulder as they talk excitedly about their current home hacking project, you get excited too, and you pay more attention. And whatever has kept you from learning more is, I think, pushed aside as you catch that whiff of passion and joy that’s bubbling up in front of you.

Learning in our careers is not just about slogging through another 1,000-page tome on Enterprise Best Practices or research papers on distributed systems. I know that I got started with computers because it was fun, I was excited, and I couldn’t wait to spend more time with the machine when I was away from it. That flame of passion and desire is what we are inviting into our t’salon every month, and we hope it will inspire the people who show up to push themselves to try something new. Even if they don’t try something right away, at least they will be exposed to something new, which is good in and of itself – it lets us know what’s over the horizon, waiting for us, as soon as we have time to head over there.

Right now the t’salon is invitation-only, but we are planning to open it up as soon as we feel the momentum is there and we have a core group of attendees established. But if you think it’s a good idea, by all means, fork and go to it.


Publinksky, a Chrome Extension; Second Thoughts

A year ago, I heard from a colleague about a competition Google was hosting to develop new extensions for the Chrome browser. Google had recently released the API to develop extensions, and they thought the competition might jump-start the process of building up a library of extensions from programmers outside of Google.

My friends and co-workers Pierre Heinzerling, Uwe Janner and I wrote an extension called Publinksky, which we published in mid-March 2010. We promptly lost. That was OK. For my part, I got to experiment some more with the Google App Engine (GAE), the Jersey (JAX-RS) library, the GSON Java-to-JSON library, the GAE data APIs, and JavaScript. Pierre got to work with some jQuery, and Uwe focused on the authentication portion of the Google APIs.
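
GSON, in particular, earned its keep by making the Java-to-JSON round trip trivial. Here is a minimal sketch of the kind of thing I mean – the SharedLink class and its fields are made up for illustration, not Publinksky’s actual data model:

    import com.google.gson.Gson;

    public class GsonDemo {

        // Hypothetical data class, purely for illustration.
        static class SharedLink {
            String url;
            String comment;

            SharedLink(String url, String comment) {
                this.url = url;
                this.comment = comment;
            }
        }

        public static void main(String[] args) {
            Gson gson = new Gson();

            // Object -> JSON string
            String json = gson.toJson(new SharedLink("http://example.com", "worth a look"));
            System.out.println(json);        // {"url":"http://example.com","comment":"worth a look"}

            // JSON string -> object
            SharedLink back = gson.fromJson(json, SharedLink.class);
            System.out.println(back.url);    // http://example.com
        }
    }

No schemas, no configuration files; GSON just reflects over the fields.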

What did Publinksky do? Read the Fine Manual.

Some thoughts, a year later.

  • It’s very easy to write a browser extension for Chrome. It’s also very easy to write a no-cost, hosted, server-side backend for the extension, assuming you need it. It takes minimal effort to get started. We should all be surprised by this, and by how unsurprised we actually are.
  • The Chrome extension APIs appear quite stable. I was able to reinstall the plugin just now, many months (6? 9?) after last testing it, with multiple Chrome updates in between. Nice job.
  • I found writing any substantial amount of JavaScript slow, as I’ve lost familiarity with the language, but more than that, the lack of any compiler support led me to make stupid errors, over and over, that were only caught at runtime. Blame this on working too heavily with statically-typed languages for a long time now. However, IntelliJ IDEA’s JavaScript support did its best to help me find errors early.
  • The Java server-side libraries have gotten better, easier to use, and conceptually slimmer. It took me almost no time to get a REST interface set up with the Jersey (JAX-RS) library, and making classes persistent was straightforward using annotations and the GAE persistence APIs (a rough sketch of what that looks like follows this list).
  • Google did a good job with the Google App Engine. Our server-side app is still running, 339 days after launch. I haven’t touched it since I last uploaded it. I can log in and see usage stats, browse the data, manage versions, and so on, all in one place. This is exactly how it should be, which means Google did it right.
  • Developing multi-tier applications is not any easier, or much fun. In part this had to do with the fact that we were using what was essentially a beta release of the Chrome extension support. There’s a lot of starting and restarting, loading and reloading, deploying and redeploying, and related nonsense associated with writing even a simple distributed application of this kind. The tools make this much easier than in the past, but in many ways it still feels like too much busy work. If I take this tool chain up again, I will do some yak-shaving first and prepare the tools to save myself some time.
  • Making a useful extension within the security sandbox is difficult. In practice it seems hard to write a useful browser extension that doesn’t require the user to agree to a fairly liberal privacy policy. Our extension wanted to share interesting URLs with the home server so that friends could visit and comment on the links. To do this the user has to grant access to “your data on all websites” and “your browsing history”. Why? I believe “data on all websites” is so that, using the extension, you could highlight a piece of text and submit it along with the link; making the clipboard available to the extension required this permission. And access to the browsing history was so that the extension could highlight links you had already visited. The fact that such broad permissions were required for fairly standard functionality makes me think this privacy model is simply broken. Why should you give such broad permissions to an extension written by someone you don’t know?
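
For the curious, here is roughly what the server side looked like in spirit: a Jersey (JAX-RS) resource sitting in front of a JDO-annotated entity stored via the GAE persistence APIs. This is a hedged, minimal sketch – the SharedLink entity, the /links path and the “transactions-optional” persistence unit name follow the stock GAE examples and are not the actual Publinksky code:

    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import com.google.gson.Gson;

    // A JDO entity the GAE datastore can persist; names are illustrative.
    @PersistenceCapable
    class SharedLink {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Long id;

        @Persistent
        private String url;

        @Persistent
        private String comment;
    }

    // A Jersey resource exposing the entity over REST, serialized with GSON.
    @Path("/links")
    public class LinkResource {

        // One factory per application, as the GAE docs recommend; the
        // "transactions-optional" unit is the standard jdoconfig.xml setup.
        private static final PersistenceManagerFactory PMF =
                JDOHelper.getPersistenceManagerFactory("transactions-optional");

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public String getLink(@PathParam("id") Long id) {
            PersistenceManager pm = PMF.getPersistenceManager();
            try {
                // Detach before serializing so GSON sees a plain object.
                SharedLink link = pm.detachCopy(pm.getObjectById(SharedLink.class, id));
                return new Gson().toJson(link);
            } finally {
                pm.close();
            }
        }
    }

Hook the resource up through Jersey’s servlet in web.xml and App Engine serves it as-is; that really is most of the work.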

Paradigm Shifts: January/February 2011

  • Nowadays I mostly just type phrases in for Google searches, like “hosting static content on github”. No more reformulating my queries into a terse, keyword-only “searchese”. I actually go out of my way to write (mostly) grammatically correct English phrases to use as my search string.
  • In cooking, I’ve become used to measuring by weight, common in Europe, rather than by volume, which is common in the US. However, a majority of websites list recipes with measurements by volume – and the new tool for me to use is WolframAlpha. I type in the ingredient exactly as listed, e.g. “1/2 cup oats” and get WA to show me how much that is in grams. Surprisingly WA also usually identifies the ingredient correctly. Never thought I would use a “computational engine” for baking, but there you go.
  • Surprising to me that JavaScript is being used as a server-side programming language; here’s an interview with one team on why they chose Node.js over the alternatives. I see Node as relying on cooperative multitasking – one event loop, where any single callback in the queue can block the event loop – which I thought was an idea we’d largely abandoned as a design. But maybe I was wrong in thinking that way.
  • Eye-opener of the day, JavaScript used as a host for Smalltalk: http://www.silversmalltalk.com/, not sure what version, but, wow. I keep hearing people say that JS will be the ultimate target language as JS runtimes are now installed, basically, “everywhere”; projects like this are a great inspiration.

Interview with Alan Kay

I came across a series of interviews on YouTube with Alan Kay, who co-created Smalltalk at Xerox PARC, as well as Squeak. The interview is of very good quality, in both picture and sound, and Kay is given extensive time to give the background for his work with Smalltalk and Squeak – which was originally driven by a desire to take children’s education to a whole other level.

The interview is split into 7 parts. Here’s part 1 – for the rest, visit the playlist on YouTube.


40 years and going strong

InfoQ has made available a video of a talk, “Forty Years of Fun with Computers”, by Dan Ingalls, who co-created Smalltalk at Xerox PARC, Squeak a few years later, and most recently, the Lively Kernel, a programming environment which runs within a web page. He’s one of those inventors and programmers whose accomplishments astound me, and who remains utterly humble about what he’s achieved.

The presentation includes some very cool demos of things they pulled off in Squeak and the Lively Kernel. Don’t miss the part with the piston and the piano.

A must-see.


On the line: 210111

InfoQ has become a pretty good resource of late for videos of conference presentations and talks, as well as interviews.

The collection and archives on InfoQ are worth browsing, but I wanted to highlight some really good videos that have shown up in the last week or so.

Billy Newport (IBM), presentation on NoSQL: In this talk, Newport contrasts the strengths and weaknesses of NoSQL databases with traditional relational databases, based on his experience with actual customer systems using one or the other. I like how he addresses drawbacks of NoSQL that I don’t hear highlighted often enough – the lack of a central schema, for example, and the restriction to, essentially, by-key retrieval as opposed to ad-hoc search. He also points out limitations of relational systems at large scale, and how people are trying to get around them.

Cliff Click (Azul Systems), interview on the HotSpot JVM, pauseless garbage collection, and the Managed Runtime Project: In the 1990s, Cliff Click was hired by Sun to help them write a new JIT for their JVM after Sun purchased a startup company that had developed a new, powerful Smalltalk VM called Strongtalk. Here, Cliff talks about the current work being done at his present company, Azul Systems, on “pauseless garbage collection” in Azul’s fork of HotSpot, Azul’s plans to give their enhancements back to the community, the problems some non-Java dynamic languages face on the JVM, and other topics.

Guy Steele, presentation on language support for parallel programming: Guy Steele, co-creator of the Scheme programming language, talks about how a programming language might offer automatic support for parallel processing, including experimental work in that direction in Fortress, the new language he’s been working on. His talk starts with an unexpected, and truly masterful, review of a small program he wrote in assembly language around 40 years ago, which he reverse-engineered from punch cards he still had in a desk somewhere.

Stuart Halloway, presentation on Clojure and Java interoperability: Stuart Halloway does a presentation on how Clojure interoperates with Java. I find Stuart to be a very good presenter: he moves along at a good clip, his slides are readable and the content is helpful, and he has a pragmatic attitude about Clojure. Personally, I’ve found all his talks and slide decks useful in learning Clojure.


On the line: 020809

  • Got a copy of Masterminds of Programming, which has a silly title, but which includes pretty thorough, and very interesting, interviews with a number of designers of well-known programming languages, including C++, Objective-C, ML, Haskell, Java, C#, BASIC, Awk, Perl, Eiffel and others. I find it fascinating how different these people are and how varied their views about programming and computer science are. Gives me hope, somehow, to see that there is a lot of healthy diversity in the field of language design, and lots of room for different views and approaches. Worth a read.
  • Also picked up Beautiful Architecture, which has essays from developers of various largish computer systems, including Facebook, Project Darkstar, JPC, Jikes JVM, Eiffel, Xen, KDE, and GNU Emacs, among others. Not much to say yet, as I’m just digging in.
  • The DaVinci Machine project continues to plug along, and Attila Szegedi posted to the JVM Languages Group that he’s made a build of the latest DaVinci JVM available for download, if you’re using OS X. This is a nice, helpful contribution and hopefully will make life easier for people who just want to try it out without fussing with Mercurial changesets and an (occasionally) complex build setup. Attila, by the way, is one of the maintainers of the Rhino JavaScript engine for the JVM, as well as a proponent of an interoperability mechanism among JVM languages (the so-called meta-object protocol, or MOP).
  • For some reason, there’s some work on Groovy working well with Clojure–see this post by Andres Almiray on invoking a Clojure script from Groovy. Very clean, kudos.
  • Charles Nutter got JRuby working on Android again, while another developer is working on a dialect of BASIC for Android, called Simple (which is open-source). Interesting to see where this will go–I assume we’ll see more and more languages targeted at Android over time, and wonder how that will affect the experience for app developers.
  • JetBrains has released version 1.0 of the Meta Programming System (MPS), which, for me at least, failed the tutorial test–even building a small language to calculate values was a grind through one abstract, foreign concept after another. The bulk of it appears to be open-source. I think the most interesting idea is that we (somehow) get away from restricting our language designs based on the limits of our ability to parse symbols from text files, and use a language modeling tool instead, which builds editors for us.

One of my favorite interviews from Masterminds of Programming was the one with Tom Love, who co-developed Objective-C with Brad Cox. Tom is funny and insightful, though, oddly enough for a book about programming languages, he focuses less during the interview on language design itself and more on the problems of writing and maintaining large code bases, team and project management, and so on. Favorite quote from the interview:

“I’m currently working with a government client that has 11 million lines of code, some of which is 25 years old, for which there are no test cases. There’s no system-level documentation. They even stripped out the comments in the code as an efficiency measure a few years ago, and it’s not under configuration control, and they issue about 50 patches per month for the system and have been doing so since 1996. That’s a problem.” (Tom Love, Masterminds of Programming)


On the line: 240709


On what Open Source is, and is not

(accidentally deleted this post; re-adding under the same date)

Just found this post on a discussion forum for an open source project called HandBrake. The author is one of the HandBrake developers, as far as I can tell. A must-read. To quote:

I am aware of no open source software either currently or previously available that catered to the needs, whims, or desires of end-users. That isn’t what it’s about. If you want the freedom to tell someone what you want and expect them to do it, that’s called commercial software, where you make your intentions known with your purchasing decisions and vote with your wallet. That is not open source.

At some point I’d like to write about my experience contributing to FOSS projects. It’s been interesting and educational to have had the experience on both sides of the equation, as user and as developer.

There’s another angle the HandBrake forum author doesn’t touch on: the use of open source software within a company. I’m currently working at a gig where open source libraries are used exclusively. There simply is no commercial software in sight for the main project we’re developing, though internally there are some groups that use Windows and there are licenses for those. What is interesting is to see how little interest the technical leadership has in actually contributing back to the open source projects we rely on. It’s simply never discussed, and the couple of times I suggested specific steps–for example, submitting a set of Maven build files we’d developed to compile a library we rely on–the response was lukewarm. I don’t know what the solution is, but it certainly seems odd that nowadays you can build a whole software product line without ever paying a dime or even contributing back to the process in any way.


Avoiding the Next Big Language

I was just listening to a podcast introducing the F# language, and what struck me was that the two folks being interviewed–Chris Smith and Dustin Campbell–never pitched F# as the next big language for the CLR. Rather, they focused on what they thought the advantages were and invited people to just try it out; in fact, to keep using whatever language they were already using on the CLR, but to be open to using F# when it seemed called for. You can find the podcast here.

I contrast this with what I’ve been seeing in various discussion forums around the web–the ones for Java and JVM users seem to be the biggest culprits here–where selecting a programming language seems to be a winner-takes-all proposition. It seems to me that, at least from a marketing angle, it’s better to promote your own language (or the one you currently like) as an alternative, rather than the one which everyone should pick up–must pick up, in fact–after dropping whatever language they are currently using.

Competition is useful in the marketplace of ideas. Just yesterday, Jonas Bonér posted a great presentation on approaches to concurrent and parallel programming on the JVM. What’s clear from that talk is that in that area of investigation there is not only competition, but also room for experimentation. There’s even room for more than one implementation of a given idea, such as Actors. We saw this some years ago with dependency injection/inversion of control and O/R mapping, among others. And it’s clear that while there are winners (e.g. Spring for IoC and Hibernate for ORM) there is still room for other approaches even over the long term.

On the JVM we do see some people promoting a mix of languages to solve problems–for example, JRuby on Rails calling into Java libraries, or Groovy used as a build tool for regular Java applications instead of some XML-based build tool. But then, if we really believed that we could just mix-and-match, I doubt we’d be seeing the long, heated, rather anguished discussions about “which language to use”.

It seems to me on the one hand that for a certain class of organization it’s very important to have a single, mainstream language used for the bulk of programming tasks. I’ve worked at that sort of IT shop before, and I can see the value in that approach. They have a known quantity, their developers can move between projects more easily, it’s a straightforward proposition to find programmers skilled in a mainstream language, etc. That sort of environment also seems to favor a number of secondary aspects around language selection, including language and library stability, technical support, documentation, training and educational materials, and backwards compatibility during upgrades to the language and libraries.

I’ve also worked at shops on the other end of the language-selection spectrum, which are willing to combine whatever set of tools gets them to their end goal. They expect their programmers to be comfortable with a pile of programming tools, the same way we’d expect a carpenter to work with both power and hand tools. One gig that comes to mind relied on a combination of Perl, Lua, Java and C to get the job done, with a healthy number of scripts written in Bash, Awk and Perl for admin tasks. It wouldn’t have been a stretch for them to add, say, Python and Ruby to that mix. They were, as far as it goes, language agnostic, though I wouldn’t say they were picking up new languages just for the heck of it. It’s just that they saw the value in each individual language and had no policy enforcing the single-language principle.

One group of programmers is of the opinion that we in the IT industry will in the future no longer settle on one or two “mainstream” languages for, say, business applications. Rather, we will gradually move towards a model where a number of languages are used in concert, with each one applied to solve part of the problem. That seems counter-intuitive to me, as I’m not sure what fundamentals in the industry have changed; we certainly have had a plethora of stable, useful languages available for the last 30 years, and yet it seems that just a small handful rise to the top of the list at any given time. What would drive a move towards real polyglot computing?

Another group seems to feel that, just as was the case in previous generations, we will run into the lack of flexibility in the current mainstream choices, for example Java or C#, and will find another language (or more than one) to take their place. That generational change may be driven by external demands which the current crop of languages simply cannot fully or easily address, for example, the rise of manycore systems.

What I believe–and the reason I’m writing this–is that it’s counter-productive to be talking about the Next Big Language. If there is a NBL, believe me, you’ll know it, either because you jump on that bandwagon early and make a bazillion bucks or because you ignore it and find your job prospects shrinking faster than your worth in the marketplace. If, for economic reasons, you want to make sure to be an early adopter, well, in this industry you have a lot of ground to cover, and likely you’re no better as a fortune teller than anyone else. In other words: wait for it, it will come.

I think it’s more interesting to talk about what different programming languages–big or not–allow me to do that those in my current toolset don’t. This podcast on F#, I think, took the right approach in just encouraging curiosity and interest in a different way of doing things.
