Interesting 2009 ACM SIGs

Sometimes people ask me whether or not an ACM membership is “worth it”. The special interest groups are one reason to join. Here are some that look interesting:

  1. SIGAPP: Applied Computing
  2. SIGARCH: Computer Architecture
  3. SIGART: Artificial Intelligence
  4. SIGCAS: Computers and Society
  5. SIGCOMM: Data Communication
  6. SIGCSE: Computer Science Education
  7. SIGDOC: Design of Communication
  8. SIGIR: Information Retrieval
  9. SIGITE: Information Technology Education
  10. SIGKDD: Knowledge Discovery in Data
  11. SIGMOBILE: Mobility of Systems, Users, Data & Computing
  12. SIGMOD: Management of Data
  13. SIGOPS: Operating Systems
  14. SIGPLAN: Programming Languages
  15. SIGSAC: Security, Audit & Control
  16. SIGSOFT: Software Engineering

Sugar went on a diet

In one of the recent builds of Sugar for the OLPC XO, I found that “out of the box” the machine barely had enough free memory to run programs. Tonight I installed the most recent operating system version, 8.2.0, and found that the amount of free memory had increased drastically:

                 Total MB   Used MB   Free MB
Sugar 8.2.0           230       162        67
Sugar Previous        232       220        12

With 55MB more free memory, I expect this machine to play a lot nicer with its users!
I used

free -m

to get these measurements.

Cross-compiling Java to Objective-C for the iPhone

It seems that you can cross-compile Java to Objective-C for the iPhone. I didn’t dig any deeper than that.
Here is the blurb for the technologies used:

The goal of XMLVM is to offer a flexible and extensible cross-compiler toolchain. Instead of cross-compiling on a source code level, XMLVM cross-compiles byte code instructions from Sun Microsystems’ virtual machine and Microsoft’s Common Language Runtime. The benefit of this approach is that byte code instructions are easier to cross-compile and the difficult parsing of a high-level programming language is left to a regular compiler. In XMLVM, byte code-based programs are represented as XML documents. This allows manipulation and translation of XMLVM-based programs using advanced XML technologies such as XSLT, XQuery, and XPath.

Apple’s iPhone has generated huge interest amongst users and developers alike. Like Mac OS X, the iPhone development environment is based on Objective-C as the development language and Cocoa for the GUI library. The iPhone SDK license agreement does not permit the development of a virtual machine. Using XMLVM, we circumvent this problem by cross-compiling Java to the iPhone. Just like a Java application can be cross-compiled to AJAX, XMLVM can be used to cross-compile a Java application to Objective-C. The cross-compilation is also accomplished by mimicking a stack-based machine in Objective-C. Consider the instruction irem (integer remainder) that pops two integers off the stack and pushes their remainder after division back onto the stack. Using the following XSL template, the instruction can be mapped to Objective-C.
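The XSL template itself is not reproduced in the excerpt above, but the idea behind “mimicking a stack-based machine” is easy to sketch. Here is a rough illustration in Scheme rather than Objective-C (the stack representation and helper names are mine, not XMLVM’s): irem pops two integers off the stack and pushes the remainder of truncating division, just as on the JVM.

  ;; A toy operand stack; an XMLVM backend emits the equivalent
  ;; pushes and pops in Objective-C instead.
  (define stack '())

  (define (push! v) (set! stack (cons v stack)))

  (define (pop!)
    (let ((v (car stack)))
      (set! stack (cdr stack))
      v))

  ;; irem: value2 is on top, value1 beneath it; push the remainder.
  ;; Scheme's `remainder' truncates toward zero, matching JVM irem.
  (define (irem!)
    (let* ((value2 (pop!))
           (value1 (pop!)))
      (push! (remainder value1 value2))))

  (push! 7)
  (push! 3)
  (irem!)
  (car stack) ; => 1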

(via the PLT Mailing List)

What is all the fuss about how you can write DSLs in Lisp?

In this post on the PLT discussion list I asked:

What is all the fuss about how you can write DSLs in Lisp? Everyone from thought leaders to blog posters to grandmas is talking about how Lisp is so great for DSLs. What are these people talking about? None of said people actually elaborates on any of this, of course, which leads me to question their claims.

jrm provided the following excellent reply (mirrored here):

On 6/7/07, Grant Rettke wrote:

That said, when I think of a DSL I think about letting folks write "programs" like:

  "trade 100 shares(x) when (time < 20:00) and timingisright()"

When I think about syntax transformation in Lisp I think primarily about language features.

In order to talk about domain-specific languages you need a definition of what a language is. Semi-formally, a computer language is a system of syntax and semantics that lets you describe a computational process. It is characterised by these features:

  1. Primitive constructs, provided ab initio in the language.
  2. Means of combination to create complex elements from the primitives.
  3. Means of abstraction to control the resulting complexity.

So a domain-specific language would have primitives that are specific to the domain in question, means of combination that may model the natural combinations in the domain, and means of abstraction that model the natural abstractions in the domain.
To bring this back down to earth, let's consider your `trading language':

  "trade 100 shares(x) when (time < 20:00) and timingisright()"

One of the first things I notice about this is that it has some special syntax. There are three approaches to designing programming language syntax. The first is to develop a good understanding of programming language grammars and parsers and then carefully construct a grammar that can be parsed by an LALR(1) or LL(k) parser. The second approach is to `wing it' and make up some ad hoc syntax involving curly braces, semicolons, and other random punctuation. The third approach is to `punt' and use the parser at hand.

I'm guessing that you took approach number two. That's fine, because you aren't actually proposing a language, but rather creating a topic of discussion.

I am continually amazed at the fact that many so-called language designers opt for option 2. This leads to strange and bizarre syntax that can be very hard to parse and has ambiguities, `holes' (constructs that *ought* to be expressible but where the logical syntax does something different), and `nonsense' (constructs that *are* parseable but have no logical meaning). Languages such as C++ and Python have these sorts of problems.

Few people choose option 1 for a domain-specific language. It hardly seems worth the effort for a `tiny' language.

Unfortunately, few people choose option 3. Part of the reason is that the parser for the implementation language is not usually separable from the compiler.

For Lisp and Scheme hackers, though, option 3 is a no-brainer. You call `read' and you get back something very close to the AST for your domain-specific language. The `drawback' is that your DSL will have a fully parenthesized prefix syntax. Of course Lisp and Scheme hackers don't consider this a drawback at all.
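To see how little work option 3 actually is, here is a small sketch of my own (not part of jrm's reply) that calls `read' on a fully parenthesized rule; the rule text is hypothetical, and in PLT Scheme 20:00 happens to read as an ordinary symbol:

  ;; Option 3 in a few lines: `read' returns something very close
  ;; to an AST for the rule.
  (define rule
    (read (open-input-string
           "(when (and (< time 20:00) (timing-is-right))
              (trade (make-shares 100 x)))")))

  (car rule)  ; => when
  (cadr rule) ; => (and (< time 20:00) (timing-is-right))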
So let's change your DSL slightly to make life easier for us:

  (when (and (< time 20:00)
             (timing-is-right))
    (trade (make-shares 100 x)))
When implementing a DSL, you have several strategies: you could write an interpreter for it, you could compile it to machine code, or you could compile it to a different high-level language. Compiling to machine code is unattractive because it is hard to debug and you have to be concerned with linking, binary formats, stack layouts, etc.

Interpretation is a popular choice, but there is an interesting drawback. Almost all DSLs have generic language features. You probably want integers and strings and vectors. You probably want subroutines and variables. A good chunk of your interpreter will be implementing these generic features, and only a small part will be doing the special DSL stuff.

Compiling to a different high-level language has a number of advantages. It is easier to read and debug the compiled code, and you can make the `primitives' in your DSL be rather high-level constructs in your target language.

Lisp has a leg up on this process. You can compile your DSL into Lisp and dynamically link it to the running Lisp image. You can `steal' the bulk of the generic part of your DSL from the existing Lisp: DSL variables become Lisp variables, and DSL expressions become Lisp expressions where possible. Your `compiler' is mostly the identity function, and a handful of macros cover the rest. You can use the means of combination and means of abstraction provided by Lisp. This saves you a lot of work in designing the language, and it saves the user a lot of work in learning your language (*if* he already knows Lisp, that is).
The real `lightweight' DSLs in Lisp look just like Lisp. They *are* Lisp. The `middleweight' DSLs do a bit more heavy processing in the macro expansion phase, but are *mostly* Lisp. `Heavyweight' DSLs, where the language semantics are quite different from Lisp, can nonetheless benefit from Lisp syntax. They'll *look* like Lisp, but act a bit differently (FrTime is a good example).

You might even say that nearly all Lisp programs are `flyweight' DSLs.
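To make the "mostly the identity function" point concrete, here is a minimal sketch of the middleweight approach. The domain primitives are stubs I invented for illustration (timing-is-right, current-minutes, and trade! are not from any real trading system), and only one macro is needed for the DSL-specific notation:

  ;; Hypothetical domain primitives; a real system would supply these.
  (define (timing-is-right) #t)      ; stub strategy predicate
  (define (current-minutes) 1180)    ; 19:40, as minutes since midnight
  (define (trade! quantity symbol)   ; stub order submission
    (display (list 'trade quantity 'shares 'of symbol))
    (newline))

  ;; One macro covers the only DSL-specific notation: a time written
  ;; as (hh:mm 20 00) expands to plain arithmetic on minutes.
  (define-syntax hh:mm
    (syntax-rules ()
      ((_ h m) (+ (* 60 h) m))))

  ;; The trading rule is now ordinary Scheme: DSL variables are Scheme
  ;; variables, DSL expressions are Scheme expressions, and the rest
  ;; of the language is stolen from Scheme itself.
  (when (and (< (current-minutes) (hh:mm 20 00))
             (timing-is-right))
    (trade! 100 'x))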

Warp Speed Introduction to CLOS

[The Warp Speed Introduction to CLOS] is an extremely short overview of the Common Lisp Object System (CLOS). Newcomers to CLOS are often intimidated by its apparent complexity and possibly confused by its unusual approach. For a long time I considered CLOS to be “too hairy”, but after working on a large project that made heavy use of CLOS, I found that CLOS is actually one of the simplest and easiest object systems to understand and use. The more complex parts of CLOS can be safely ignored.

(via jrm)