Archived pages: 122. Archive date: 2014-10.

Yet Another JavaScript Blog
Modelling and Simulation, Web Engineering, User Interfaces

October 7th, 2012
New projects: scxml-viz, scion-shell, and scion-web-simulation-environment

I have released three new projects under an Apache 2.0 license:

- scxml-viz: A library for visualizing SCXML documents.
- scion-shell: A simple shell environment for the SCION SCXML interpreter. It accepts SCXML events via stdin, and thus can be used to integrate SCXML with Unix shell programming. It integrates scxml-viz, so it can also allow graphical simulation of SCXML models.
- scion-web-simulation-environment: A simple proof-of-concept web sandbox environment for developing SCXML. Code can be entered on the left and visualized on the right. Furthermore, SCION is integrated, so code can be simulated and graphically animated. A demo can be found here: http://goo.gl/wG5cq

Tags: d3, scion, scxml, shell, unix, visualization, viz. Posted in Uncategorized | No Comments.

August 4th, 2012
Thinkpad W520 Multi-Monitor nVidia Optimus with Bumblebee on Ubuntu 12.04

Last night I decided to upgrade from Ubuntu 11.10 to 12.04 on my Thinkpad W520. The main reason for this was that my current setup was making suboptimal use of the hardware, and due to recent advances which I found documented in several blog posts, it seemed I could improve this situation. The goal of this post is then to document what I hoped to achieve, and how I arrived there, so that in the future I'll be able to remember what the heck I did to set this all up.

Project Goals

I purchased the Thinkpad W520 back in November, because my fanless Mini Inspiron netbook kept overheating when I left it to run performance benchmarks related to my research. Ubuntu 11.10 worked pretty well on the W520 out of the box, but there were two major outstanding compatibility issues. First, the W520 comes with nVidia Optimus graphics.
In this setup, the laptop has a discrete nVidia card and an on-board Intel graphics card, and the operating system is able to enable and disable the nVidia card in software in order to save power. nVidia has explicitly stated that they will not support Optimus in Linux, which six months ago meant that there were only two options for Linux users: enable only the discrete graphics or only the integrated graphics in the BIOS.

When the Intel graphics were enabled in the BIOS, the open source Intel integrated graphics drivers worked like a dream: 3D acceleration, flawless suspend/resume support, and everything was just a superb, rock-solid experience. The battery life was also excellent. For someone like me who mostly uses the laptop to write software and does not care about 3D acceleration, this would have been an ideal choice, except for one major flaw: the external display ports on the W520 (VGA and DisplayPort) are hardwired to the nVidia card, so using an external monitor is impossible when only the Intel graphics are enabled in the BIOS. I use an external monitor at home, so this meant Intel graphics were a nonstarter for me.

As it was not possible to use Optimus or Intel graphics, this left me with only one choice, which was to use the nVidia graphics. This process went something like this:

- Tried the nouveau driver. This worked pretty well, but would hang X on suspend/resume. Solid suspend/resume support is a must-have, so I eliminated this option.
- Tried to install the nVidia binary driver in Ubuntu using the nice graphical interface (jockey-gtk). Ultimately, this did not work.
- Uninstalled the binary driver using jockey-gtk.
- Tried to install the nVidia binary driver by running the Linux installer shell script from nVidia. This felt evil, because you have no idea what the script is doing to your system, but everything installed correctly, and after a reboot, the laptop finally had working graphics.
The binary nVidia drivers were pretty solid: 3D acceleration, multi-monitor support, suspend/resume, and VDPAU video acceleration all worked great. My laptop had an uptime of several months under this configuration. Unfortunately, however, battery life was pretty poor, clocking in at about 3 hours. Furthermore, and more seriously, the laptop firmware has a bug where Linux would hang at boot when both VT-x (Intel hardware virtualization technology) and nVidia graphics were enabled in the BIOS. This was pretty annoying, as I tend to run Windows in a VM in VirtualBox on Linux for testing compatibility with different versions of Internet Explorer. I believe this bug is now being tracked by Linux kernel developers, who are working around the issue by disabling X2APIC on boot, but Lenovo has refused to fix this bug, or even acknowledge its existence. Not cool, Lenovo. This meant that it would not be possible to have working multi-head support together with reasonable battery life and VT-x support. Not optimal.

Bumblebee

Bumblebee is a project to bring support for nVidia Optimus to Linux. It basically renders a virtual X server on the nVidia card, and then passes the buffer to the Intel card, which dumps it to the screen. Apparently, this is pretty much how Optimus works on Windows as well. The advantage of using Bumblebee is that, theoretically, you would be able to have the excellent battery life of the Intel graphics, but also have 3D acceleration and multi-monitor support from the nVidia graphics.

I tried Bumblebee 6 months ago, but was unable to get it to work. The project had also been forked around that time, and it wasn't clear which fork to follow. However, the following blog posts led me to believe that the situation had changed, and a multi-monitor setup could be achieved using Ubuntu 12.04 and Bumblebee:

- http://sagark.org/optimal-ubuntu-graphics-setup-for-thinkpads
- http://zachstechnotes.blogspot.com/2012/04/post-title.html
- http://blog.gordin.de/post/optimus-guide

I decided to see if I could get this to work myself, and ultimately I was successful. My current setup is now as follows:

- Ubuntu 12.04 x64
- Optimus Graphics and VT-x enabled in BIOS
- External monitor, which can be enabled or disabled on-demand, and works consistently after suspend/resume
- Bumblebee set to use automatic power switching, so the nVidia card is disabled when not in use
- Xmonad and Unity 2D desktop environment

The remainder of the blog post documents the process I went through in order to obtain this optimal setup.

Multi-Monitor Support with Optimus and Bumblebee on Ubuntu 12.04

I primarily followed the process described on Sagar Karandikar's blog, up to, but not including, his changes to /etc/bumblebee/bumblebee.conf. Sagar says to set several parameters in /etc/bumblebee/bumblebee.conf, and then enable the external monitor with a short sequence of commands. As far as I understand it, the parameters set in /etc/bumblebee/bumblebee.conf tell Bumblebee to use the nVidia proprietary driver (Driver=nvidia), keep the nVidia card turned on (PMMethod=none disables Bumblebee power management), and perpetually run an X server (KeepUnusedXServer=true). Clearly this setup would have negative implications for battery life, as the nVidia card is kept on and active. optirun true should then turn on the nVidia card and output to the external monitor. xrandr tells the X server where to put the virtual display, and screenclone clones the X server running on display :8 (the X server being run by Bumblebee on the nVidia card) to the Intel virtual display.

I found that this technique was really finicky. It would enable the external display right after rebooting, but would often not enable the display in other situations, such as after a suspend/resume cycle. It wasn't clear how to bring the nVidia card back into a good state where it could output to an external monitor.
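For reference, the bumblebee.conf parameters named above, together with the xrandr/screenclone invocation, would look roughly like the following. This is a sketch reconstructed from the parameter names mentioned in the post; the config section names, output names, and screen geometry are assumptions, since the original snippets did not survive archiving.

```
# /etc/bumblebee/bumblebee.conf (fragment; section names are assumptions)
[bumblebeed]
KeepUnusedXServer=true   # leave Bumblebee's X server on :8 running
Driver=nvidia            # use the proprietary nVidia driver

[driver-nvidia]
PMMethod=none            # disable power management; card stays on

# Enabling the external monitor (VIRTUAL is the Intel virtual output;
# the mode and layout below are illustrative):
#   optirun true
#   xrandr --output VIRTUAL --mode 1920x1080 --right-of LVDS1
#   screenclone -d :8 -x 1
```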
At this point, I read a comment by Gordin on Sagar's blog, and his blog post. In this post, he describes using bbswitch for power management in bumblebee.conf, and running the second display using optirun screenclone -d :8 -x 1. This has the advantages of: a) enabling power management on the nVidia card, so it is turned off when not in use, and b) seemingly increased reliability, as the nVidia card will be enabled when optirun is run, and disabled when the screenclone process is terminated. Based on these instructions, I came up with an adapted solution: set the appropriate settings in /etc/bumblebee/bumblebee.conf, and use a small shell script to enable the external monitor; ^C will disable the external monitor.

This setup now works great, although screenclone does behave a bit strangely sometimes. For example, when switching workspaces, screenclone may require you to click in the workspace on the second desktop before it updates its graphics there. There were a few other minor quirks I found, but ultimately it seems like a solid and reliable solution.

Desktop Environment: Xmonad and Unity 2D

I like Unity mostly because it provides a good global menu, but I find most other parts of it, including window management and the launcher, to be clunky or not very useful. Furthermore, certain Ubuntu compiz plugins that would improve the window management, such as the Put plugin, seem to be completely broken on Ubuntu out of the box:

- https://bugs.launchpad.net/ubuntu/+source/compiz/+bug/898087
- https://bugs.launchpad.net/ubuntu/+source/compiz/+bug/291854

I therefore set up my desktop environment to use the Unity 2D panel and the Xmonad window manager. I primarily followed this guide to set this up: http://www.elonflegenheimer.com/2012/06/22/xmonad-in-ubuntu-12.04-with-unity-2d

The only change I made was to /usr/bin/gnome-session-xmonad. I'm not sure why, but xmonad was not getting started with the desktop session.
I therefore started it in the background in /usr/bin/gnome-session-xmonad, along with xcompmgr, a program which provides compositing when running non-compositing window managers like Xmonad. xcompmgr allows things like notification windows to appear translucent. For a launcher, I'm now trying out synapse, which can be set to run when the gnome session is started.

Tags: graphics, laptop, linux, nvidia, optimus, thinkpad. 3 Comments.

July 29th, 2012
Syracuse Student Sandbox Hackathon Recap

Yesterday I participated in a hackathon at the Syracuse Student Sandbox. This blog post is meant to provide a quick recap of the interesting technical contributions that came out of this event. All source code mentioned in this article is available on Github.

What I Did

My project idea was to develop a voice menu interface to the Archive.org live music archive using Twilio. The idea was that you would call a particular phone number, and be presented with a voice menu interface. There would be options to listen to the Archive.org top music pick, or to perform a search.

Core Technology

Archive.org

Archive.org exposes a very nice, hacker-friendly API. It is fairly well-documented here. I only encountered a few gotchas: the API to the main page does not return valid JSON, and so it must be parsed using JavaScript's eval; and the query API is based on Lucene query syntax, which I did not find documented anywhere.

Developing a Twilio telephony application is just like developing a regular web application. When you register with Twilio, they assign you a phone number, which you can then point to a web server URL. When someone calls the number, Twilio performs an HTTP request (either GET or POST, depending on how you have it configured) to the server which you specified. Instead of returning HTML, you return TwiML. Each tag in a TwiML document is a verb which tells Twilio what to do.
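As a sketch of what returning TwiML looks like in practice: a minimal hand-rolled Node-style handler might look like this. The helper names here are illustrative assumptions, not the post's actual code; Twilio only cares that the HTTP response body is valid TwiML XML served as text/xml.

```javascript
// Sketch: build a TwiML response that tells Twilio to play an MP3 stream.
// buildPlayTwiML is a hypothetical helper.
function buildPlayTwiML(mp3Url) {
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<Response>\n' +
    '  <Play>' + mp3Url + '</Play>\n' +
    '</Response>';
}

// A tiny request handler in the style of Node's http module:
function handlePick(req, res, topPickUrl) {
  res.writeHead(200, { 'Content-Type': 'text/xml' });
  res.end(buildPlayTwiML(topPickUrl));
}
```

Pointing a Twilio number at a URL served by such a handler is all it takes for a caller to hear the MP3.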
TwiML documents can be modelled as state machines, in that there's a particular flow between elements. For certain tags, Twilio will simply flow to the next tag after performing the action associated with that tag; however, for other tags, Twilio will perform a request (again, either GET or POST) to a URL specified by the tag's action attribute, and will execute the TwiML document returned by that request. This is analogous to submitting a form in HTML. Each HTTP request performed by Twilio will submit some data, like the caller's phone number and location, as well as a variable which allows the server to track the session. There were a few instances of undocumented behaviour that I encountered, but overall developing a TwiML application was as easy as it sounds.

After I had my node.js hosting set up, I had an initial demo working in less than an hour, in which the user could call in and would be able to hear the archive.org live music pick. This was simply a matter of using Archive.org's API to retrieve the URL to the file of the top live music pick, and passing this URL to Twilio in a Play element. Twilio was then able to stream the MP3 file directly from Archive.org.

Main Technical Contribution: Using SCXML and SCION to Model Navigation in a Node.js Web Application

I developed the application using Node.js and SCION, an SCXML/Statecharts interpreter library I've been working on. In addition to providing a very small module for querying the archive.org API using Node.js, I feel the main technical contribution of this project was using SCXML to model web navigation, and I will elaborate on that contribution in this section. Using Statecharts to model web navigation is not a new idea (see StateWebCharts, for example); however, I believe this is the first time this technique has been used in conjunction with Node.js.
From a high level, SCXML can be used to describe the possible flows between pages in a Web application. SCXML allows one to model these flows explicitly, so that every possible session state and the transitions between session states are well-defined. Another way to describe this is that SCXML can be used to implement routing which changes depending on session state.

A web server accepts an HTTP request as input and asynchronously returns an HTTP response as output. Each HTTP request can contain parameters, encoded as query parameters on the URL in the case of a GET request, or as POST data for a POST request. These parameters can contain data that allows the server to map the HTTP request to a particular session, as well as other data submitted by the user.

These inputs to the web server were mapped to SCXML in the following way. First, an SCXML session was created for each HTTP session, such that subsequent HTTP requests would be dispatched to this one SCXML session, and this SCXML session would maintain all of the session state. Each HTTP request was turned into an SCXML event and dispatched as input to the SCXML session corresponding to the session of that HTTP request. An SCXML event has name and data properties. The URL of the request was used as the event name, and the parsed query parameters were used as the event data. Furthermore, the Node.js HTTP request and response objects were also included as event data. In this implementation, SCXML states were mapped to individual web pages, which were returned to the user on the HTTP response.

The SCXML document modelling navigation can be found here. Here is a graphical rendering of it (automatically generated using scxmlgui):

Statecharts Diagram

    <?xml version="1.0" encoding="UTF-8"?>
    <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" profile="ecmascript">
      <datamodel>
        <data id="serverUrl" expr="'http://jacobbeard.net:1337'"/>
        <data id="api"/>
      </datamodel>
      <script src="./playPick.js"/>
      <script src="./performSearch.js"/>
      <state id="initial_default">
        <transition event="init" target="waiting_for_initial_request">
          <assign location="api" expr="_event.data"/>
        </transition>
      </state>
      <state id="waiting_for_initial_request">
        <transition target="root_menu" event="/"/>
      </state>
      <state id="root_menu">
        <onentry>
          <log label="entering root_menu" expr="_events"/>
          <!-- we want to send this as a response. hack SCION so we can do that somehow -->
          <Response>
            <Gather numDigits="1" action="number_received" method="GET">
              <Say>Root Menu</Say>
              <Say>Press 1 to listen to the archive dot org live music pick.
                   Press 2 to search the archive dot org live music archive.</Say>
            </Gather>
          </Response>
        </onentry>
        <transition target="playing_pick" event="/number_received" cond="_event.data.params.Digits === '1'"/>
        <transition target="searching" event="/number_received" cond="_event.data.params.Digits === '2'"/>
        <!-- anything else - catchall error condition -->
        <transition target="root_menu" event="*">
          <Response>
            <Gather numDigits="1" action="number_received" method="GET">
              <Say>I did not understand your response.</Say>
              <Say>Press 1 to listen to the archive dot org live music pick.</Say>
            </Gather>
          </Response>
        </transition>
      </state>
      <state id="playing_pick">
        <!-- TODO: move the logic in playPack into SCXML -->
        <onentry>
          <log label="entering playing_pick"/>
          <script>playPick(_event.data.response,api);</script>
        </onentry>
        <!-- whatever we do, just return -->
        <transition target="root_menu" event="*"/>
      </state>
      <state id="searching">
        <datamodel>
          <data id="searchNumber"/>
          <data id="searchTerm"/>
        </datamodel>
        <onentry>
          <log label="entering searching"/>
          <Response>
            <Gather numDigits="1" action="number_received" finishOnKey="*" method="GET">
              <Say>Press 1 to search for an artist. Press 2 to search for a title.</Say>
            </Gather>
            <Redirect method="GET">/</Redirect>
          </Response>
        </onentry>
        <transition target="receiving_search_input" event="/number_received"
                    cond="_event.data.params.Digits === '1' || _event.data.params.Digits === '2'">
          <assign location="searchNumber" expr="_event.data.params.Digits"/>
        </transition>
        <transition target="root_menu" event="/"/>
        <transition target="bad_search_number" event="*"/>
      </state>
      <state id="receiving_search_input">
        <onentry>
          <Response>
            <Gather numDigits="3" action="number_received" method="GET">
              <Say>Press the first three digits of the name to search for.</Say>
            </Gather>
            <Redirect method="GET">/</Redirect>
          </Response>
        </onentry>
        <transition target="performing_search" event="/number_received" cond="_event.data.params.Digits">
          <assign location="searchTerm" expr="_event.data.params.Digits"/>
        </transition>
        <transition target="bad_search_number" event="/number_received"/>
        <transition target="root_menu" event="*"/>
      </state>
      <state id="performing_search">
        <onentry>
          <script>performSearch(searchNumber,searchTerm,_event.data.response,api);</script>
        </onentry>
        <transition target="searching" event="/search-complete"/>
        <transition target="searching" event="/artist-not-found"/>
        <transition target="root_menu" event="*"/>
      </state>
      <state id="bad_search_number">
        <onentry>
          <Response>
            <Say>I didn't understand the number you entered.</Say>
            <Redirect method="GET">/</Redirect>
          </Response>
        </onentry>
        <transition target="searching" event="/"/>
      </state>
    </scxml>

Note that the transition conditions do not appear in the above diagram, so I would recommend reading the SCXML document as well as the diagram. In this model, the statechart starts in an initial_default state in which it waits for an init event. The init event is used to pass platform-specific APIs into the state machine. After receiving the init event, the statechart will transition to state waiting_for_initial_request, where it will wait for an initial request to URL /. After receiving this request, it will transition to state root_menu. Of particular interest here are the actions in the onentry tag. The TwiML document to be returned to the user is inlined directly as a custom action within onentry, and is executed by the interpreter by writing that document to the node.js response object's output stream.
This document will tell Twilio to wait for the user to press a single digit, and to submit a GET request to URL /number_received when the request completes. There are three transitions originating from root_menu. The first targets state playing_pick, the second targets state searching, and the third loops back to state root_menu. The first two transitions have a cond attribute, which is used to inspect the data sent with the request. So, for example, if the user presses "1", Twilio would submit a GET request to URL /number_received?Digits=1 (along with other URL parameters, which I have omitted for simplicity). This would be transformed into the SCXML event {name : '/number_received', data : { Digits : '1' }}, which would then activate the transition to playing_pick. The system would then transition to playing_pick, which would call a JavaScript function that would query the Archive.org API to retrieve the URL to Archive.org's top song pick, and would output a TwiML document on the HTTP response object which would contain the URL to that song.

If the user pressed a "2" instead of a "1", then the cond attribute would cause the statechart to activate the transition to state searching instead of playing_pick. If the user pressed anything else, or attempted to navigate to any other URL, then the wildcard * event on the third transition would simply cause the statechart to loop back to root_menu. The rest of the application is implemented in a similar fashion.

Comments and Critiques

While overall I feel this effort was successful, and demonstrates a technique that could be used to develop larger and more complex applications, there are ways I would like to improve it. First, while I feel that being able to inline the response as custom action code in the entry action of a state is a rather elegant approach, it would be useful to make the inline XML templated so that it can use data from the system's datamodel.
Second, there's a disconnect between the action specified in the returned document (the URL to which the document will be submitted), and the transitions originating from ... language (as opposed to a build Domain Specific Language, like Ant), and at the same time provide facilities for defining build targets with dependencies, and topologically sorting them when they are invoked. Ant still has many advantages, however, including great support in existing continuous integration systems. The approach I have described seems to marry the advantages of using Ant with those of using your preferred scripting language.

Integrating Ant with Maven

At this point, I had brought over most of the existing functionality from the Rhino build script into Ant, and I was beginning to look at ways to then hook into Maven. While I had some previous experience working with Ant, I had never before worked with Maven, and so there was a learning curve. The goal was to hook into the existing Apache Commons Maven-based build infrastructure, while at the same time trying to reuse existing code. While this part was non-trivial to develop, it is actually the least interesting part of the process to me, and I think the least relevant to this blog (it doesn't have much to do with JavaScript or Open Web technologies), so I'm only going to briefly describe it.

The build architecture is currently as follows. I felt it was important to maintain both an Ant front-end and a Maven front-end to the build, as each has advantages for certain tasks. Common functionality is imported from build-common.xml. Both the Maven (pom.xml) and Ant (build.xml) front-ends delegate to mvn-ant-build.xml, which contains most of the core tasks without the dependencies between targets.
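The earlier idea of keeping the build in a general-purpose scripting language while still getting Ant-style targets with dependencies, topologically sorted on invocation, can be sketched in a few lines of JavaScript. This is purely illustrative, not the project's actual build code:

```javascript
// Sketch: Ant-style build targets in plain JavaScript. Each target declares
// its dependencies; invoking a target runs its dependencies first, in
// topologically sorted order, and runs each target at most once.
var targets = {};

function target(name, deps, fn) {
  targets[name] = { deps: deps, fn: fn, done: false };
}

function invoke(name, ran) {
  ran = ran || [];
  var t = targets[name];
  if (t.done) return ran;
  t.done = true; // mark before recursing, so diamond dependencies run once
  t.deps.forEach(function (dep) { invoke(dep, ran); });
  t.fn();
  ran.push(name);
  return ran;
}

// Example usage:
target('clean', [], function () { /* delete build artifacts */ });
target('compile', ['clean'], function () { /* compile sources */ });
target('test', ['compile'], function () { /* run the test suite */ });
```

Calling invoke('test') runs clean, then compile, then test, much as an Ant target with depends="..." would.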
Based on my experience on the Maven mailing list, if you are a "Maven person" (a person who has "drunk the Maven kool-aid", not my words; Maven people seem to like to use this phrase), then this architecture built around delegation to Ant will likely make you cry. It will seem needlessly complex, when the alternative of creating a set of custom Maven plugins will seem much better. This might be the case, and I proposed investigating this option. The problem, however, seems to be that relying on custom Maven plugins for building is a no-go for Commons projects (with the exception of the Maven Commons plugin), as it is uncertain where these plugins would be hosted. However, building a Maven plugin for the process of compiling JavaScript using the RequireJS framework to Java bytecode, as outlined above, is, I think, something that has value, and which I would like to pursue at some point.

Future Work

I still have not put scxml-js forward for a vote, and even though the refactoring of the build system is more or less complete, I still may not do so. I have just arrived in Belgium, where I will be working on my Master's thesis for three months, and so I may need to deprioritize my work on scxml-js while I prioritize researching the theoretical aspects of my thesis. Also, now that SVG Open has passed, there seems to be less incentive to publish an alpha release. It may be better to give scxml-js more time to mature, and then release later on.

Tags: ant, build, javascript, maven, rhino.

August 16th, 2010
Google Summer of Code 2010, Final Update

The "pencils down" date for Google Summer of Code 2010 is right now. Here's a quick overview of what I feel I have contributed to scxml-js thus far, and what I feel should be done in the future.

Tests and Testing Framework

Critical to the development of scxml-js was the creation of a robust testing framework.
scxml-js was written using a tests-first development style, which is to say that before adding any new feature, I would attempt to map out the implications of that feature, including all possible edge cases, and would then write tests for success, failure, and sanity. By automating these tests, it was possible to avoid regressions when new features were added, and thus maintain robustness as the codebase became more complex.

Testing scxml-js was an interesting challenge with respect to automated testing, as it was necessary to test both the generated target code (using ahead-of-time compilation), and the compiler itself (using just-in-time compilation), running in all the major web browsers, as well as on the JVM under Rhino. This represented many usage contexts, and so a great deal of complexity was bundled into the resulting build script. The tests written usually conformed to a general format: a given SCXML input file would be compiled and instantiated, and a script would send events into the compiled statechart while asserting that the state had updated correctly. A custom build script, written in JavaScript, automated the process of compiling and running test cases, starting and stopping web browsers, and harvesting results. dojo.doh and Selenium RC were used in the testing framework.

Going Forward

It would be useful to phase out the custom JavaScript build script for a more standard build tool, such as Maven or Ant. This may be challenging, however, given the number of usage contexts of the scxml-js compiler, as well as the fact that the API it exposes is asynchronous. Another task I'd like to perform is to take the tests written for Commons SCXML and port them so that they can be used in scxml-js. Finally, I have often noticed strange behaviour with Selenium. At this moment, when run under Selenium, tests are broken for in-browser compilation under Internet Explorer; however, when run manually, they always pass.
I've traced where the tests are failing, and it's a strange and intermittent failure involving parsing an XML document. I think it may be caused by the way that Selenium instruments the code in the page. I feel it may be worthwhile to investigate alternatives to Selenium.

scxml-js Compiler

This page provides an overview of what features work right now, and what do not. In general, I think scxml-js is probably stable enough to use in many contexts. Unfortunately, scxml-js has had only one user, and that has been me. I'm certain that when other developers do begin using it, they will break it and find lots of bugs. I'm hoping to prepare a pre-alpha release to coincide with the SVG Open 2010 conference at the end of the month, and in preparation for this, I'm reaching out to people I know to ask them to attempt to use scxml-js in a non-trivial project. This will help me find bugs before I attempt to release scxml-js for general consumption.

There are still edge cases which I have in mind that need to be tested. For example, I haven't done much testing of nested parallel states. I also have further performance optimizations which I'd like to implement. For example, I've been using JavaScript 1.6 functional Array prototype extensions (e.g. map, filter, and forEach) in the generated code, and augmenting Array for compatibility with Internet Explorer. However, these methods are often slower than using a regular for loop, especially in IE, and so it would be good to swap them out for regular for loops in the target code. Another performance enhancement would be to encode the statechart's current configuration as a single scalar state variable, rather than encoding it as an array of basic state variables, for statecharts that do not contain parallel states.
This would reduce the time required to dispatch events for these types of statecharts, as the statechart instance would no longer need to iterate through each state of the current configuration, thus removing the overhead of the for loop. I'm sure that once outside developers begin to look at the code, they will have lots of ideas on how to improve performance as well. There are other interesting parts of the project that still need to be investigated, including exploring the best way to integrate scxml-js with existing JavaScript toolkits, such as jQuery UI and Dojo.

Graph Layout, Visualization, and Listener API

As I stated in the initial project proposal, one of my goals for GSoC was to create a tool that would take an SCXML document and generate a graphical representation of that document. By targeting SVG, this graphical representation could then be scripted. By attaching a listener to a statechart instance, the SVG document could then be animated in response to state changes. I was able to accomplish this by porting several graph layout algorithms written by Denis Dube for his Master's thesis at the McGill University Modelling, Simulation and Design Lab. Denis was kind enough to license his implementations for release in ASF projects under the Apache License. You can see a demo of some of this work here.

The intention behind this work was to create a tool that would facilitate graphical debugging of statecharts in the web browser. While this is currently possible, it still requires glue code to be manually written to generate a graphical representation from an SCXML document, and then hook up the listener. I would like to make this process easier and more automatic. I feel it should operate similarly to other compilers, in that the compiler should optionally include debugging symbols in the generated code which allow it to map to a concrete syntax (textual or graphical) representation.
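Returning to the earlier performance point about the generated code: the difference between the Array extras and plain for loops can be illustrated with a small sketch. This is illustrative only, not scxml-js's actual output; the configuration shape is an assumption.

```javascript
// Style 1: JavaScript 1.6 Array extras (map/filter), concise but often
// slower in older engines, especially IE.
function activeStateIdsFunctional(configuration) {
  return configuration
    .filter(function (state) { return state.isActive; })
    .map(function (state) { return state.id; });
}

// Style 2: the equivalent plain for loop, as might be emitted in target
// code for speed. Same result, no function-call overhead per element.
function activeStateIdsLoop(configuration) {
  var ids = [];
  for (var i = 0; i < configuration.length; i++) {
    if (configuration[i].isActive) ids.push(configuration[i].id);
  }
  return ids;
}
```

Because the two are behaviourally identical, a compiler can freely emit the for-loop form in the generated code while the author keeps the functional style in hand-written source.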
Another issue that needs to be resolved is cross-browser compatibility. It's currently possible to generate SVG in Firefox and Batik, but there are known issues in Chromium and Opera. Also, there are several more graph layout algorithms implemented by Denis which I have not yet ported. I'd really like to see this happen. Finally, my initial inquiries on the svg-developers mailing list indicated that this work would be useful for other projects. I therefore feel that these JavaScript graph layout implementations should be moved into a portable library. Also, rather than generating a graphical representation directly from SCXML, it should be possible to generate a graphical representation from a more neutral markup format for describing graphs, such as GraphML.

Demos

I have written some nice demos that illustrate the various aspects of scxml-js, including how it may be used in the development of rich, Web-based user interfaces. The most interesting and complex examples are the Drawing Tool Demos, which implement a subset of Inkscape's UI behaviour. The first demo uses scxml-js with a just-in-time compilation technique; the second uses ahead-of-time compilation; and the third uses just-in-time compilation, and generates a graphical representation on the fly, which it then animates in response to UI events. This last demo only works well in Firefox right now, but shows what should be possible going forward. I have several other ideas for demos, which I will attempt to implement before the SVG Open conference.

Documentation

The main sources of documentation now are the User Guide, the source code for the demos, and Section 5 of my SVG Open paper submission on scxml-js.

This has been an exciting and engaging project to work on, and I'm extremely grateful to Google, the Apache Software Foundation, and my mentor Rahul for facilitating this experience.

Tags: gsoc.

June 28th, 2010
Google Summer of Code, Update 3: More Live Demos
Just a quick update this time. The scxml-js project is moving right along, as I've been adding support for new features at, on average, a rate of about a feature per day. Today, I reached an interesting milestone: scxml-js is now as featureful as the old SCCJS compiler which I had previously been using in my research. This means that I can now begin porting the demos and prototypes I constructed using SCCJS to scxml-js, as well as begin creating new ones.

New Demos

Here are two new, simple demos that illustrate how scxml-js may be used to describe and implement the behaviour of web user interfaces (tested in recent Firefox and Chromium; will definitely not work in IE due to their use of XHTML):

Drag-and-Drop Behaviour 1
Drag-and-Drop Behaviour 2

Both examples use state machines to describe and implement drag-and-drop behaviour of SVG elements. The first example is interesting because it illustrates how HTML, SVG, and SCXML can be used together in a single compound document to declaratively describe UI structure and behaviour. The second example illustrates how one may create state machines and DOM elements dynamically and procedurally using JavaScript, as opposed to declaratively using XML markup. In this example, each dynamically-created element will have its own state machine, hence its own state. I think the code in these examples is fairly clean and instructive, and should give a good sense of how scxml-js may ultimately be used as a finished product.

Tags: google summer of code

June 23rd, 2010

Google Summer of Code 2010, Project Update 2

Here's another quick update on the status of my Google Summer of Code project.

Finished Porting IR-Compiler and Code Generation Components to XSLT

As described in the previous post, I finished porting the IR-compiler and Code Generation components from E4X to XSLT.
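The essence of the drag-and-drop demos above can be sketched as an explicit two-state machine (idle and dragging), decoupled from the DOM so the logic stands on its own. The event shapes and function names here are invented for illustration; the real demos drive compiled SCXML statecharts from SVG mouse events.

```javascript
// Sketch of drag-and-drop behaviour as a two-state machine.
// Wiring to real SVG elements (addEventListener, setting cx/cy) is omitted.
function makeDragBehaviour(shape) {
  var state = 'idle';
  var offsetX = 0, offsetY = 0;
  return {
    getState: function () { return state; },
    dispatch: function (event) {
      switch (state) {
        case 'idle':
          if (event.type === 'mousedown') {
            // remember where inside the shape the pointer grabbed it
            offsetX = event.x - shape.x;
            offsetY = event.y - shape.y;
            state = 'dragging';
          }
          break;
        case 'dragging':
          if (event.type === 'mousemove') {
            // self-transition: follow the pointer
            shape.x = event.x - offsetX;
            shape.y = event.y - offsetY;
          } else if (event.type === 'mouseup') {
            state = 'idle';
          }
          break;
      }
    }
  };
}

// Each shape gets its own machine, hence its own state:
var shape = { x: 10, y: 10 };
var drag = makeDragBehaviour(shape);
drag.dispatch({ type: 'mousedown', x: 12, y: 12 });
drag.dispatch({ type: 'mousemove', x: 30, y: 25 });
drag.dispatch({ type: 'mouseup' });
// shape is now at { x: 28, y: 23 }
```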
Once I had this working with the Java XML transformation APIs under Rhino, I followed up with the completion of two related subtasks:

Get the XSL transformations working in-browser, and across all major browsers (IE8, Firefox 3.5, Safari 5, Chrome 5; Opera still to come).
Create a single consolidated compiler front-end, written in JavaScript, that works in both the browser and in Rhino.

Cross-Browser XSL Transformation

Getting all XSL transformations to work reliably across browsers was something I expressed serious concerns about in my previous post. Indeed, this task posed some interesting challenges, and motivated certain design decisions.

The main issue I encountered in getting these XSL transformations to work was that support for xsl:import in XSL stylesheets, when called from JavaScript, is not very good in most browsers. xsl:import works well in Firefox, but is currently distinctly broken in WebKit and WebKit-based browsers (see the Chrome bug report, and the WebKit bug report). I also had limited success with it in IE 8.

I considered several possible solutions to work around this bug. First, I looked into a pure JavaScript solution. In my previous post, I linked to the Sarissa and AJAXSLT libraries. In general, a common task of JavaScript libraries is to abstract out browser differences, so the fact that several libraries existed which appeared to do just that for XSLT offered me a degree of confidence when I initially chose XSLT as a primary technology with which to implement scxml-js. Unfortunately, in this development cycle, on closer inspection, I found that Sarissa, AJAXSLT, and all other libraries designed to abstract out cross-browser XSLT differences (including Javeline, and the jQuery XSL transform plugin) are not actively maintained. As web browsers are rapidly moving targets, maintenance is a major concern when selecting a library dependency. In any case, a pure JavaScript solution did not appear feasible.
This left me to get the XSL transformations working using just the bare metal of the browser. My next attempt was to use some clever DOM manipulation to work around the WebKit bug. In the WebKit bug, xsl:import does not work because frameless resources cannot load other resources. This meant that loading the SCXML document on its own in Chrome, with an xml-stylesheet processing instruction pointing to the code generation stylesheet, did generate code correctly. My idea, then, was to use the DOM to create an invisible iframe, load into it the SCXML document to transform, along with the requisite processing instruction, and read out the transformed JavaScript. I actually had some success with this, but it seemed to be a brittle solution. I was able to get it to work, but not reliably, and it was difficult to know when and how to read the transformed JavaScript out of the iframe. In any case, my attempts at this can be found in this branch.

My final, and ultimately successful, attempt was to use XSL to preprocess the stylesheets that used xsl:import, so as to combine the stylesheet contents while still respecting the semantics of xsl:import. This was not too difficult, and only took a bit of effort to debug. You can see the results. Note that there may be some corner cases of XSLT that are not handled by this script, but it works well for the existing scxml-js code generation backends. This is the solution upon which I ultimately settled. One thing that must still be done, given this solution, is to incorporate this stylesheet preprocessing into the build step. For the moment, I have simply done the quick and dirty thing, which is to check the preprocessed stylesheets into SVN.

It's interesting to note that IE 8 was the easiest browser to work with in this cycle, as it provided useful and meaningful error messages when XSL transformations failed.
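The flattening idea can be illustrated with a toy JavaScript sketch. The real preprocessor was written in XSLT and respects xsl:import's precedence semantics; this regex-based version only shows the basic inlining step, and the stylesheet map and names are invented for the example.

```javascript
// Toy sketch of stylesheet preprocessing: inline the contents of imported
// stylesheets so the result no longer relies on xsl:import at runtime.
// NOT a real implementation: import precedence and edge cases are ignored.
function flattenImports(name, sheets) {
  var text = sheets[name];
  return text.replace(/<xsl:import\s+href="([^"]+)"\s*\/>/g, function (_, href) {
    // Recursively flatten the imported sheet, then keep only its top-level
    // content (strip the xsl:stylesheet wrapper) before splicing it in.
    var imported = flattenImports(href, sheets);
    return imported
      .replace(/^[\s\S]*?<xsl:stylesheet[^>]*>/, '')
      .replace(/<\/xsl:stylesheet>\s*$/, '');
  });
}

var sheets = {
  'util.xsl':
    '<xsl:stylesheet version="1.0"><xsl:template name="id"/></xsl:stylesheet>',
  'main.xsl':
    '<xsl:stylesheet version="1.0"><xsl:import href="util.xsl"/>' +
    '<xsl:template match="/"/></xsl:stylesheet>'
};

var flat = flattenImports('main.xsl', sheets);
// flat now contains the "id" template inline, and no xsl:import element
```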
By contrast, Firefox would return a cryptic error message without much useful information, and Safari/Chrome would not provide any error message at all, instead failing silently in the XSLT processor and returning undefined.

Consolidated Compiler Front-end

As I described in my previous post, a thin front-end to the XSL stylesheets was needed. For the purposes of running inside the browser, the front-end would need to be written in JavaScript. It would have been possible, however, to write a separate front-end in a different language (bash, Java, or anything else) for the purposes of running outside the browser. A design decision needed to be made, then, regarding how the front-end should be implemented:

Implement one unified front-end, written in JavaScript, which relies on modules that provide portable APIs, with implementations of those APIs that vary between environments.
Implement multiple front-ends, for browser and server environments.

I decided that, with respect to maintainability, it would be easier to maintain one front-end, written in one language, rather than two front-ends in different languages, and so I chose the first option. This worked well, but I'm not yet completely happy with the result, as I have code for Rhino and code for the browser mixed together in the same module. This means that code for Rhino is downloaded to the browser, even though it is never called (see Transformer for an example of this). The same is true for code that targets IE versus other browsers. I believe I've thought of a way to use RequireJS to selectively download platform-specific modules, and this is an optimization that I'll make in the near future.

In-Browser Demo

The result of this work can be seen in this demo site I threw together: http://live.echo-flow.com/scxml-js/demo/sandbox/sandbox

This demo provides a very crude illustration of what a browser-based graphical user interface to the compiler might look like.
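The single-front-end decision comes down to a small piece of environment detection behind a portable API. Here is a sketch using the detection idioms of that era (the Packages object under Rhino, XSLTProcessor in W3C-style browsers, ActiveXObject in IE); the returned backend names are invented for the example.

```javascript
// Pick an environment-specific transformer backend from feature tests.
// In the real project this selection would load a module implementing a
// portable transform API; here we just return a label.
function chooseTransformerBackend(env) {
  if (typeof env.Packages !== 'undefined') {
    return 'rhino-trax';      // Rhino: delegate to the Java XML APIs
  }
  if (typeof env.XSLTProcessor !== 'undefined') {
    return 'browser-w3c';     // Firefox, WebKit, Opera
  }
  if (typeof env.ActiveXObject !== 'undefined') {
    return 'browser-msxml';   // IE 8 and earlier, via MSXML
  }
  throw new Error('no XSLT implementation available');
}
```

With RequireJS-style conditional loading, only the selected module would ever be downloaded to the browser, which addresses the dead-code concern described above.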
It takes SCXML as input (top-most textarea), compiles it to JavaScript code (lower-left textarea, read-only), and then allows simulation from the console (bottom-right textarea and text input). For convenience, the demo populates the SCXML input textarea with the KitchenSink executable content example. I've tested it in IE8, Safari 5, Chrome 5, and Firefox 3.5. It works best in Chrome and Firefox. I haven't been testing in Opera, but I'm going to start soon.

The past three weeks were spent porting and refactoring, which was necessary to facilitate future progress, and now there's lots to do going forward. My feeling is that it's now time to get back to the main work, which is adding important features to the compiler, starting with functionality still missing from the current implementation of the core module: https://issues.apache.org/jira/browse/SCXML-137

I'm going to be presenting this work at the SVG Open 2010 conference at the end of August, so I'm also keen to prepare some new, compelling demos that will really illustrate the power of Statecharts on the web.
This work is licensed under GPL - 2009 | Powered by Wordpress using the theme aav1.

    Original link path: /
    Open archive




  • Title: Yet Another JavaScript Blog » scxml
    Descriptive info: June 6th, 2010

Google Summer of Code 2010, Project Update 1

I'm two weeks into my Google Summer of Code project, and decided it was time to write the first update describing the work I've done, and the work I will do.

Project Overview

First, a quick overview of what my project is, what it does, and why one might care about it. The SCXML Code Generation Framework, JavaScript Edition project (SCXMLcgf/js) centers on the development of a particular tool, the purpose of which is to accelerate the development of rich Web-based User Interfaces. The idea behind it is that there is a modelling language, called Statecharts, which is very good at describing the dynamic behaviour of objects, and can be used for describing rich UI behaviour as well. The tool I'm developing, then, is a Statechart-to-JavaScript compiler, which takes as input Statechart models as SCXML documents, and compiles them to executable JavaScript code, which can then be used in the development of complex Web UIs. I'm currently developing this tool under the auspices of the Apache Foundation during this year's Google Summer of Code. For more information on it, you could read my GSoC project proposal, or even check out the code.

Week 1 Overview

As I said above, I'm now two weeks into the project. I had already done some work on this last semester, so I've been adding in support for additional modules described in the SCXML specification. In Week 1, I added basic support for the Script Module. I wrote some tests for this, and it seemed to work well, so I checked it in.

Difficulties with E4X

I had originally written SCXMLcgf/js entirely in JavaScript, targeting the Mozilla Rhino JavaScript implementation. One feature that Rhino offers is the E4X language extension to JavaScript. E4X was fantastic for rapidly developing my project.
It was particularly useful over standard JavaScript in terms of providing an elegant syntax for: templating (multiline strings with embedded parameters, and regular JavaScript scoping rules), queries against the XML document structure (very similar to XPath), and easy manipulation of that structure. These language features allowed me to write my compiler in a very declarative style: I would execute transformations on the input SCXML document, then query the resulting structure and pass it into templates which generated code in a top-down fashion. I leveraged E4X's language features heavily throughout my project, and was very productive.

Unfortunately, during Week 1, I ran into some difficulties with E4X. There was some weirdness involving namespaces, and some involving scoping. This wasn't entirely surprising, as the Rhino implementation of E4X has not always felt very robust to me. Right out of the box, there is a bug that prevents one from parsing XML files with XML declarations, and I have encountered other problems as well. In any case, I lost an afternoon to this problem, and decided that I needed to begin to remove SCXMLcgf/js's E4X dependencies sooner rather than later.

I had known that it would eventually be necessary to move away from E4X for portability reasons, as it would be desirable to be able to run SCXMLcgf/js in the browser environment, including non-Mozilla browsers. There are a number of reasons for this, including the possibility of using the compiler as a JIT compiler, and the possibility of providing a browser-based environment for Statechart development. Given the problems I had had with E4X in Week 1, I decided to move this task up in my schedule, and deal with it immediately. So, for Week 2, I've been porting most of my code to XSLT.

Justification for Targeting XSLT

At the beginning of Week 2, I knew I needed to migrate away from E4X, but it wasn't clear what the replacement technology should be.
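For readers who never used E4X: the templating style described above can be roughly approximated today with ES2015 template literals (multiline strings, embedded parameters, ordinary JavaScript scoping). This is a modern analogue, not the project's actual code; the state shape and generated output below are invented for illustration.

```javascript
// Rough modern analogue of E4X-style top-down code-generation templates.
// A "state" object goes in; the source text of a function comes out.
function genStateFunction(state) {
  return `
function enter_${state.id}() {
  // entry actions for ${state.id}
${state.onEntry.map(function (a) { return '  ' + a + ';'; }).join('\n')}
}`.trim();
}

var code = genStateFunction({ id: 'idle', onEntry: ['log("entered idle")'] });
// code holds the source text of an enter_idle function
```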
So, I spent a lot of time thinking about SCXMLcgf/js, its architecture, and the requirements that this imposes on the technology. The architecture of SCXMLcgf/js can be broken into three main components:

Front End: Takes in arguments, possibly passed in from the command line, and passes these in as options to the IR Compiler and the Code Generator.
IR Compiler: Analyzes the given SCXML document, and creates an Intermediate Representation (IR) that is easy to generate code from.
Code Generator: Generates code from a given SCXML IR. May have multiple backend modules that target different programming languages (it currently only targets JavaScript), and different Statechart implementation techniques (it currently targets three different techniques).

My goal for Week 2 was just to eliminate E4X dependencies in the Code Generator component. The idea behind this component is that its modules should only be used for templating. The primary goal of these template modules is that they [...] it didn't take long before I was able to be productive with it. Text node children of an xsl:template are echoed out. This is well-formed XML, but I'm not sure if it's strictly legal XSLT. Anyhow, it works well, and looks good.

This was pretty bad. The best graphical debugger I found was KXSLdbg for KDE 3. I also tried the XSLT debugger for Eclipse Web Tools, and found it to be really lacking. In the end, though, I mostly just used xsl:message nodes as printfs in development, which was really slow and awkward. This part of XSLT development could definitely use some improvement. I'll talk more about 5. in a second.

XSLT Port of Code Generator and IR-Compiler Components

I started to work on the XSLT port of the Code Generator component last Saturday, and had it completed by Tuesday or Wednesday.
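The three-component architecture above can be sketched end-to-end in a few lines. The "IR" and generated code here are drastically simplified stand-ins (the real IR compiler annotates a full SCXML DOM), and all names are invented for the example.

```javascript
// IR Compiler: derive an easy-to-generate-from structure from the input.
// (Real code parses the SCXML DOM; a regex suffices for this toy input.)
function compileToIR(scxmlText) {
  var ids = [];
  scxmlText.replace(/<state\s+id="([^"]+)"/g, function (_, id) { ids.push(id); });
  return { stateIds: ids };
}

// Code Generator: one backend, targeting JavaScript source text.
function generateJs(ir) {
  return 'var states = ' + JSON.stringify(ir.stateIds) + ';';
}

// Front End: accept options and wire the two stages together.
function frontEnd(scxmlText, options) {
  var ir = compileToIR(scxmlText);
  var backend = options.backend || generateJs;   // pluggable backends
  return backend(ir);
}

var out = frontEnd('<scxml><state id="a"/><state id="b"/></scxml>', {});
// out === 'var states = ["a","b"];'
```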
This actually turned out not to be very difficult, as I had already written my E4X templates in a very XSLT-like style: top-down, primarily using recursion and iteration. There was some procedural logic in there which needed to be broken out, so there was some refactoring to do, but this wasn't too difficult.

When hooking everything up, though, I found another problem with E4X, which was that putting the Xalan XSLT library on the classpath caused E4X's XML serialization to stop working correctly. Specifically, namespaced attributes would no longer be serialized correctly. This was something I used often when creating the IR, so it became evident that it would be necessary to port the IR Compiler component in this development cycle as well.

Again, I had to weigh my technology choices. This component involves some analysis, and transformation of the given SCXML document to include this extra information. For example, for every transition, the Least Common Ancestor state is computed, as well as the states exited and the states entered for that transition. I was doubtful that XSLT would be able to do this work, or that I would have sufficient skill to program it, so I initially began porting this component to just use the DOM for transformation, and XPath for querying. However, this quickly proved not to be a productive approach, and I decided to try to use XSLT instead. I don't have too much to say about this, except to observe that, even though development was often painful due to the lack of a good graphical debugger, it was ultimately successful, and the resulting code doesn't look too bad. In most cases, I think it's quite readable and elegant, and I think it will not be difficult to maintain.

Updating the Front End

The last thing I needed to do, then, was update the Front End to match these changes. At this point, I was in the interesting situation of having all of my business logic implemented in XSLT.
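The per-transition analysis mentioned above (Least Common Ancestor, plus exit and entry sets) is easy to show in JavaScript given a parent map for the state hierarchy. The real computation runs over the SCXML document itself; the tree and helper names below are invented for the example.

```javascript
// Walk from a state to the root, collecting ancestors (state itself first).
function ancestors(state, parent) {
  var chain = [];
  for (var s = state; s; s = parent[s]) chain.push(s);
  return chain;
}

// Least Common Ancestor: first of source's ancestors that is also one of
// target's ancestors.
function lca(a, b, parent) {
  var aAnc = ancestors(a, parent);
  var bAnc = ancestors(b, parent);
  for (var i = 0; i < aAnc.length; i++) {
    if (bAnc.indexOf(aAnc[i]) !== -1) return aAnc[i];
  }
  return null;
}

// States exited: source up to (excluding) the LCA.
// States entered: LCA down to the target, in entry order.
function transitionSets(source, target, parent) {
  var l = lca(source, target, parent);
  var exited = ancestors(source, parent);
  exited = exited.slice(0, exited.indexOf(l));
  var entered = ancestors(target, parent);
  entered = entered.slice(0, entered.indexOf(l)).reverse();
  return { lca: l, exited: exited, entered: entered };
}

// Hierarchy: root > { p1 > a, p2 > b }
var parent = { a: 'p1', b: 'p2', p1: 'root', p2: 'root' };
var sets = transitionSets('a', 'b', parent);
// sets.lca === 'root'; exited: ['a','p1']; entered: ['p2','b']
```

Precomputing these sets at compile time is what lets the generated code avoid recomputing them on every event dispatch.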
I really enjoyed the idea of having a very thin front-end, so something like:

```shell
xsltproc xslt/normalizeInitialStates.xsl $1 | \
xsltproc xslt/generateUniqueStateIds.xsl - | \
xsltproc xslt/splitTransitionTargets.xsl - | \
xsltproc xslt/changeTransitionsPointingToCompoundStatesToPointToInitialStates.xsl - | \
xsltproc xslt/computeLCA.xsl - | \
xsltproc xslt/transformIf.xsl - | \
xsltproc xslt/appendStateInformation.xsl - | \
xsltproc xslt/appendBasicStateInformation.xsl - | \
xsltproc xslt/appendTransitionInformation.xsl - | \
xsltproc xslt/StatePatternStatechartGenerator.xsl - | \
xmlindent > out
```

There would be a bit more to it than that, as there would need to be some logic for command-line parsing, but this would also mostly eliminate the Rhino dependency in my project (mostly, because the code still uses js_beautify as a JavaScript code beautifier, and the build and performance analysis systems are still written in JavaScript). This approach also makes it very clear where the main programming logic is now located.

In the interest of saving time, however, I decided to continue to use Rhino for the front end, and use the SAX Java APIs for processing the XSLT transformations. I'm not terribly happy with these APIs, and I think Rhino may be making the system perceptibly slower, so I'll probably move to the thin front end at some point. But right now this approach works, and passes all unit tests, and so I'm fairly happy with it.

I'm not planning to check this work into the Apache SVN repository until I finish porting the other backends, clean things up, and re-figure out the project structure. I've been using git and git-svn for version control, though, which has been useful and interesting (this may be the subject of another blog post). After that, I'll be back onto the regular schedule of implementing modules described in the SCXML specification.

Comments (2)

    Original link path: /tag/scxml/
    Open archive


  • Title: Yet Another JavaScript Blog » Uncategorized
    Descriptive info: April 29th, 2010

Update

Courses have finished, and I've been accepted into Google Summer of Code 2010. Lots of interesting things to come.

January 31st, 2010

DocBook Customization From a User's Perspective

Today I was working on a project proposal for my course on Software Architecture. There was a strict limit of 5 pages on the document, including diagrams, so it was necessary to be creative in how we formatted the document, to fit in the maximum possible content. I think that DocBook XSL, together with Apache FOP, generates really great-looking documents out of the box. Unfortunately, however, it does tend to devote quite a lot of space to the formatting, so today I learned a few tips for styling DocBook documents. These techniques turned out to be non-trivial to discover, so I thought I'd share them with others.

Background Information

Some customization can be done very simply, by passing a parameter at build time to your XSLT processor. For many of these customizations, however, DocBook does not insulate the user from XSLT. Specifically, it is necessary to implement what DocBook XSL refers to as a customization layer. This technique is actually fairly simple, once you know about it. In short, when compiling your DocBook document to, for example, HTML or FO, you would normally point your XSLT processor at html/docbook.xsl or fo/docbook.xsl in your DocBook XSL directory. To allow for some customizations, however, you need a way to inject your own logic, and to do this, you create a new XSL document (e.g. custom-docbook-fo.xsl), which imports the docbook.xsl stylesheet you would have originally used. Because creating your own XSL document lets you inject your customization logic, this document is called a customization layer. This is not difficult in practice, but, as I said, it does not insulate the user from XSLT, which for me was a bit shocking, as I'm not used to seeing and working with XSLT.
Easy Customizations

Two customizations I wanted to do were:

Remove the Table of Contents.
Resize the body text.

Both of these customizations require the user to simply add a parameter when calling their XSLT processor. In Ant, this looks like the following:

```xml
<xslt style="custom-fo.xsl" extension=".fo" basedir="src" destdir="${doc.dir}" includes="*.xml">
  <classpath refid="xalan.classpath"/>
  <param name="body.font.master" expression="11"/>
  <param name="generate.toc" expression="article/appendix nop"/>
</xslt>
```

The above params remove the Table of Contents, and set the body font to 11pt. Additionally, all other heading sizes are computed in terms of the body.font.master property, so they will all be resized when this property is set. That's pretty much all there is to it.

Harder Customizations

Two other customizations I wanted to do were:

Reduce the size of section titles.
Remove the indent on paragraph text.

To do this, I had to create a customization layer document in the manner I described above. It looks like the following:

```xml
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:import href="docbook-xsl/docbook-xsl-1.75.2/fo/docbook.xsl"/>

  <!-- set sect1 and sect2 title text size -->
  <xsl:attribute-set name="section.title.level1.properties">
    <xsl:attribute name="font-size">
      <xsl:value-of select="$body.font.master * 1.3"/>
      <xsl:text>pt</xsl:text>
    </xsl:attribute>
  </xsl:attribute-set>
  <xsl:attribute-set name="section.title.level2.properties">
    <xsl:attribute name="font-size">
      <xsl:value-of select="$body.font.master * 1.1"/>
      <xsl:text>pt</xsl:text>
    </xsl:attribute>
  </xsl:attribute-set>

  <!-- remove the indent on para text -->
  <xsl:param name="body.start.indent">
    <xsl:choose>
      <xsl:when test="$fop.extensions != 0">0pt</xsl:when>
      <xsl:when test="$passivetex.extensions != 0">0pt</xsl:when>
      <xsl:otherwise>0pc</xsl:otherwise>
    </xsl:choose>
  </xsl:param>
</xsl:stylesheet>
```

Note that the content of the above document was mostly copy-pasted from various sections of Part 3 of DocBook XSL: The Complete Guide.
All I had to do was guess at what it was doing, and substitute my desired values; I wouldn't have been able to program this myself. A very useful resource for these sorts of customizations is the FO Parameter Reference.

Customizations You Need a Degree in Computer Science to Understand

One of the first customizations I wanted to make was to reduce the font sizes used in the title of the document. Even with detailed instructions, it took me about two hours to figure out how to do this, just because the method of accomplishing this task was so unexpected. In general, what you're doing is the following:

Copying a template that describes how to customize the title page.
Customizing that template with things like the font size.
Using an XSL stylesheet to compile that customized copy to an XSL stylesheet. Yes, you use an XSL stylesheet to create an XSL stylesheet.
Including the compiled XSL stylesheet in your customization layer.
Optionally, automating this task by making it a part of your build process.

Holy smokes! Let's run through a concrete example of this. First, make a copy of fo/titlepage.templates.xml. I put it in the root of my project and called it mytitlepage.spec.xml. I then messed with the entities in mytitlepage.spec.xml to change the title font size. This was pretty self-explanatory. I then skipped a few steps, and integrated it with my Ant script:

```xml
<target name="build-title-page">
  <xslt style="${docbook.xsl.dir}/template/titlepage.xsl" basedir="." destdir="." includes="mytitlepage.spec.xml">
    <classpath refid="xalan.classpath"/>
  </xslt>
</target>
```

And made my build-fo task depend on this new task:

```xml
<target name="build-fo" depends="depends,build-title-page" description="Generates HTML files from DocBook XML.">
```

Now, whenever I run build-fo, mytitlepage.spec.xml will be processed by template/titlepage.xsl in my DocBook XSL directory, producing the document mytitlepage.xsl. I then import mytitlepage.xsl into my customization layer:

```xml
<xsl:import href="docbook-xsl/docbook-xsl-1.75.2/fo/docbook.xsl"/>
<xsl:import href="mytitlepage.xsl"/>
</xsl:stylesheet>
```

And that's it. It's really not that difficult once you know how to do it, and you only have to wire it all together once, but it took a long time to see how all of the pieces fit together.

My advisor knows all sorts of tricks for Latex in order to, among other things, compress documents down to sizes that get them into conferences with strict page limits. I think this is pretty standard practice. You can do the same thing with DocBook, but expect a high learning curve, especially if you've never seen XSLT or are unfamiliar with build systems. I think DocBook is pretty consistent in this respect. But, in all fairness, I was ultimately successful: all of the resources were there to allow me to figure this out myself.

Tags: customization, docbook, font, layer, size, software architecture, styling, xsl, xslt

January 28th, 2010

docbook-ant-quickstart-project

I was forced to learn DocBook for the SVG Open 2009 conference, which asks all of its authors to submit in DocBook format. I found it cumbersome and confusing to set up, but, once I had put in place all of my tool support, I actually found it to be a very productive format for authoring structured documents. Similar in concept to Latex, I now prefer to use DocBook for all of my technical writing. I like it because it's XML (this is a matter of personal taste, but I like XML as a markup format), because it is environment-agnostic (I prefer to edit in Vim, but Eclipse includes great XML tooling and integration with version-control systems, and thus is also an excellent choice for a DocBook-editing environment), and because, thanks to the Apache FOP and Batik projects, it's very easy to create PDF documents which include SVG images.

Still, I could never forget the initial pain involved in setting up DocBook, and so I've created docbook-ant-quickstart-project, a project to reduce this initial overhead for new users. From the project description:
Docbook is a great technology for producing beautiful, structured documents. However, learning to use it and its associated tools can involve a steep learning curve. This project aims to solve that problem by packaging everything needed to begin producing rich documents with Docbook. Specifically, it packages the Docbook schemas and XSL stylesheets, and the Apache FOP library and related dependencies. It also provides an Ant script for compilation, and includes sample Docbook files. Thus, the project assembles all of the components required to allow the user to begin creating PDF documents from Docbook XML sources quickly and easily.

I spent a long time looking for a similar project and, surprisingly, didn't find too much in this space. I did find one project which has precisely the same goals, but it relies on make and other command-line tools typically found on Unix platforms. Right now, I'm on Windows, and Cygwin has been problematic since Vista, so Ant and Java are a preferred solution. Also, by using Ant and Java, it is very easy to [...] that would provide better integration.

I then tried Cygwin, which attempts to create a Unix subsystem in Windows. Cygwin would give me X11, Xterm, bash, screen, vim, and pretty much everything else I require. Unfortunately, Cygwin has its own problems. Specifically, Cygwin attempts to be POSIX-compliant, and the way it encodes Unix filesystem permissions on NTFS, while totally innocuous in Windows XP, seems to conflict with Windows Vista's User Access Controls. This is not something that the Cygwin developers seem to have any interest in fixing. The result is that you get files that are extremely difficult to move and copy, and very difficult to delete, using the Windows shell. So Cygwin was not an effective solution for me.

I finally tried one last thing, a combination of tools: Xming, MSYS, MinGW, and GNUwin32.
MSYS and MinGW appear to be mostly intended for easier porting of software written for a Unix environment to Windows; however, MSYS provides a very productive Unix-flavoured shell environment inside of Windows. GNUwin32 ports many familiar GNU tools to Windows, so I have a fairly rich userland: rxvt as a terminal emulator, vim, bash, and a Unix-flavoured environment. This is not ideal, as it is not easily extensible and doesn't support any concept of packages, but it seems it's the best I can do on Windows Vista x64.

A Very Late Review of Windows Vista

Let me start with the things that I like about Vista. When I develop software, I primarily target the web as a platform, and so I like the fact that I can install a very wide range of browsers for testing: IE 6, 7, and 8 (Microsoft publishes free Virtual PC images for testing different versions of IE), Chrome, Safari, Firefox, and Opera. It's very convenient not to have to fire up a VM for testing. Hardware support is top-notch. The audio and video stack feel polished and mature; I've never had an instance of them failing. And all of my special hardware works, including the multi-touch touchscreen and the pressure-sensitive pen.

Now for the bad stuff. I want to keep this very brief, because it's no longer interesting to complain about how bad Vista is. But it is so bad, it is virtually unusable, and I want to make it clear why:

I seem to get an endless stream of popups from the OS asking if I really want to do the things I ask it to do. This transition is visually jarring, and very annoying.

It maintains the behaviour it has had since Windows 95, where if a file is opened by some application and you attempt to move it, the move will fail without meaningful feedback. This can be overcome with File Unlocker, but it's crazy that this simple usability issue has never been addressed.

File operations are so slow as to be unusable.
Before moving a file, Windows Explorer counts every single file you're going to move before it attempts to move anything. This makes no sense to me at all, because moving a file in NTFS, I believe, is just a matter of changing a pointer in the parent. If you use the Windows cmd shell with the move operation, or Cygwin/MSYS's mv operation, the move takes place instantaneously; it does not attempt to count every file before moving the parent directory. So this really is just a Windows shell issue; it has nothing to do with the underlying filesystem. As bad a user experience as moving files with Windows Explorer is, it's much, much worse when you discover that it's completely unnecessary.

Out of the box, my disk would thrash constantly, even when I wasn't doing anything. I eventually turned off Windows Defender, Windows Search, and the Indexer service, and things have gotten better.

It takes about 2 minutes to boot, and then another 5 minutes before it is at all usable, as it loads all of the crapware at boot. I've gone through msconfig and disabled a lot of the crapware preinstalled by HP, and this has gotten somewhat better, but out of the box it was just atrocious.

Windows Explorer will sometimes go berserk and start pegging my CPU.

Overall it just feels incredibly, horribly slow. I feel like it cannot keep up with the flow of my thoughts, or my simple needs for performance and responsiveness. It does not offer a good user experience.

Only drivers signed by Microsoft are allowed on 64-bit Vista. This is a huge WTF.

All in all, Vista sucks and I hate it. Maybe Windows 7 will be better. Right now, though, a real alternative is necessary, because Vista offers such a poor experience that it is simply not usable for me. I had forgotten what it was like to want to do physical violence to my computer. No longer. Really, at this point I feel like I should have gotten a Mac.

Last Word: the Karmic Koala

Ubuntu 9.10 Karmic Koala came out this past Thursday, and I just tried it out using a live USB. I'm happy to say that it sucks significantly less on my hardware than 9.04! In particular, audio now seems to work flawlessly: playback through speakers, headphones, and headphone-jack sensing all work fine; recording through the mic jack works out of the box. I didn't try Skype, but the new messaging application shipped with Ubuntu, Empathy, is able to do voice and video chat with Google Chat clients using the XMPP protocol.

I had mixed success with Empathy. It wouldn't work at all with video chat; I think this had to do with an issue involving my webcam, as Cheese and Ekiga also had trouble using it. With regard to pure audio chat, it worked fine in one case, but in another it crashed the other user's Google Chat client. Yikes. So clearly there are still some bugs that need to be worked out with respect to the client software. Still, I now feel much more optimistic about the state of the Linux audio stack. I wasn't really sure that the ALSA/PulseAudio stack was converging on something that would eventually be stable and functional enough to rival the proprietary stacks on Windows and Mac OS X. The improvements I have seen on my hardware, though, are very encouraging, so I think I may go back to Ubuntu after all. At the very least, I'm going to set up a dual boot.

Wow, that was a long post! I hope parts of it might be generally interesting to others who may be in a similar situation. In the future, though, I'm going to try to focus more on software development issues.

October 8th, 2009

JavaScript 1.7 Difficulties

For my course in compilers, we have a semester-long project in which we build a compiler for a DSL called WIG. We can target whatever language and platform we want, and there are certain language features of JavaScript, specifically in the Rhino implementation, that I thought could be leveraged very productively.
I was excited to have the opportunity to shed the burden of browser incompatibilities, and to drill down into the more advanced features of the JavaScript language. Unfortunately, I've also encountered some initial challenges, some of which are irreconcilable.

One thing that I was excited about was E4X. In WIG, you're able to define small chunks of parameterizable HTML code, which maps almost 1-1 to E4X syntax. Unfortunately, Rhino E4X support is broken on Ubuntu Intrepid and Jaunty. Adding the missing libraries to the classpath has not resolved the issue for me. On the other hand, the workaround of getting Rhino 1.7R2 from upstream, which comes with out-of-the-box E4X support, is unacceptable, as this Rhino version seems to introduce a regression in which it throws a NoMethodFoundException when calling methods on HTTP servlet Request and Response objects. I'll file a bug report about this later, but the immediate effect is that I'm stuck with the Ubuntu version, and without E4X support.

Language Incompatibilities

Destructuring assignments were first introduced in JavaScript 1.7. While array destructuring assignments have worked fine for me, unfortunately I haven't been able to get object destructuring assignments to work under any implementation but SpiderMonkey 1.8. Rhino 1.7R1 and 1.7R2, as well as SpiderMonkey 1.0, both fail to interpret the example in Mozilla's documentation: https://developer.mozilla.org/en/New_in_JavaScript_1.7#section_25. This is disappointing, as it would have provided an elegant solution to several problems presented by WIG.


  • Title: Yet Another JavaScript Blog » Thinkpad W520 Multi-Monitor nVidia Optimus with Bumblebee on Ubuntu 12.04
    Descriptive info: Comments (3)

    Ricky Goldsmith: Did vdpau acceleration ever work for you, for mplayer/vlc? I have tried all means (Discrete mode, Optimus mode); none of them work. Following your blog, I can get a virtual display. I also made my virtual display my primary display, to try and see if I can get vdpau decode from my video players. Still ... vdpau and export that display to :8 and let screenclone clone the output to display :0.0?

    Piotr Kołaczkowski: This doesn't work with resolutions higher than 1920×1600. The virtual XCRTC screen or the Intel chip seem to not support them.

    AmirY: Do you have any updates on getting it to work on 13.04 (I'm on Kubuntu)?


