Sunday, August 8, 2010

IPython ZMQ Status

IPython currently has everything proposed for GSoC, plus several extra features that improve its functionality.
The design was changed slightly at the SciPy 2010 conference in Austin, TX to improve the communication system; the idea was to write a module that lets several frontends communicate with the same kernel. That module is called KernelManager.

The work done during the project is summarized in these points:
1) IPython was split into a two-process model.
2) The two-process model uses ZeroMQ (pyzmq) for communication between frontend and kernel.
3) The messages were standardized and transported over pyzmq using JSON.
4) The frontend supports indentation and colored output (syntax highlighting).
5) raw_input is captured.
6) Tab completion and magic commands are supported.
7) The IPython output prompt "Out[1]:", subprocess output and stdout/stderr output are also supported.
8) The KernelManager supports communication over pyzmq.
9) Implemented a code-block breaker that handles indentation in the terminal and collects a complete block of code to send to the kernel.
10) Every method and function is documented.
11) Test files.
12) Input indexing such as "In[#]", implemented by creating a new request message type called prompt_request.
13) Support for Ctrl+C to stop a running process in the kernel by sending SIGINT to the kernel's pid; the pid is obtained with a new request message called pid_request (see the sketch after this list).
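
Here is a rough sketch of how the last two points could be driven from a frontend with pyzmq. The socket type, port and message field names below are my own illustrative assumptions, not the exact wire format used by the real frontend:

# Sketch only: port, socket type and message fields are assumptions.
import os
import signal
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)             # request channel to the kernel
req.connect("tcp://127.0.0.1:5555")

# ask the kernel for its pid once, using the new pid_request message
req.send_json({"msg_type": "pid_request"})
kernel_pid = req.recv_json()["pid"]

try:
    req.send_json({"msg_type": "execute_request", "code": "some_long_running_call()"})
    reply = req.recv_json()
except KeyboardInterrupt:
    # Ctrl+C in the frontend interrupts the kernel instead of killing the frontend
    os.kill(kernel_pid, signal.SIGINT)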

TODO:
1) Improve testing.
2) Improve the multiuser features.
3) A new message type called object_info_request that allows us to get object information from the kernel.

You can download and test the code from http://github.com/omazapa/ipython (see the installation section below).

If any questions or suggestions arise, please write to me at andresete.chaos@gmail.com or post to IPython's mailing lists.

Wednesday, July 14, 2010

IPython GSoC Mid-Term Status

Hi all!
This is the current status of ipython-zmq:

1) IPython was split into a two-process model.
2) The two-process model uses ZeroMQ (pyzmq) for communication between frontend and kernel.
3) The messages were standardized and transported over pyzmq using JSON.
4) The frontend supports indentation and colored output (syntax highlighting).
5) raw_input is captured.
6) Tab completion and magic commands are supported (see the sketch after this list).
7) The IPython output prompt "Out[1]:", subprocess output and stdout/stderr output are also supported.
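
To give a rough idea of what tab completion looks like over the wire, here is a minimal pyzmq sketch; the message type name, fields and port are assumptions of mine based on the design, not the exact protocol:

# Sketch only: field names and port are illustrative.
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")

# the frontend sends the text typed so far and the kernel answers with matches
req.send_json({"msg_type": "complete_request", "text": "os.pa", "line": "os.pa"})
reply = req.recv_json()
print(reply.get("matches", []))       # e.g. ['os.pardir', 'os.path', ...]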

TODO:

1) Prompt indexation.
2) Highlighting in tracebacks.
3) Kernel and frontend magics (project-specific magic commands).


Monday, July 5, 2010

Testing the IPython ZMQ code

After installing the dependencies and the code from the repositories, you can run the kernel and frontend this way:


Start an IPython kernel:
$python ipython/IPython/zmq/kernel.py
Starting the kernel...
On: tcp://127.0.0.1:5555 tcp://127.0.0.1:5556
Use Ctrl-\ (NOT Ctrl-C!) to terminate.

Start an IPython frontend:
$python ipython/IPython/zmq/frontend.py
In [1]:


Running IPython code, tab completion and some magics all work fine.
The next step is to support all magics and to write our own magics for this kernel and frontend.
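
If you want to talk to a running kernel from your own script, a minimal pyzmq client could look like the sketch below. I am assuming here that 5555 is the request/reply port and 5556 the PUB port printed above, and that messages are JSON; the prototype's exact socket types and fields may differ:

# Sketch only: ports, socket types and message fields are assumptions.
import zmq

ctx = zmq.Context()

req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")   # request/reply channel

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")   # kernel broadcasts outputs here
sub.setsockopt(zmq.SUBSCRIBE, b"")    # subscribe to everything

req.send_json({"msg_type": "execute_request", "code": "print(1 + 1)"})
print(req.recv_json())                # the execution reply
print(sub.recv_json())                # the broadcast stdout message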


Thursday, April 22, 2010

How to install the IPython-zmq code? (Linux only right now)

You will need:
* c/c++ compiler (gcc)
* git client
* cython
* uuid-dev
* python-dev
* autotools (automake, autoconf, libtool)
* pkg-config



 Install under Debian/Ubuntu:
 # apt-get install git g++ cython uuid-dev python-dev automake autoconf libtool pkg-config
 # apt-get build-dep cython



Install under Red Hat/Fedora/openSUSE with yum (package names differ slightly from Debian):
 # yum install git gcc-c++ Cython libuuid-devel python-devel automake autoconf libtool



 Building zeromq and pyzmq:
 * Download the zmq library:
     $ git clone git://github.com/sustrik/zeromq2.git



  Then run:
     $ cd zeromq2
     $ ./autogen.sh



     If you don't have root permissions:
     $ ./configure --prefix=your_favorite_installation_prefix
     $ make
     $ make install



     If you have root permissions:

      # ./configure
      # make
      # make install

      NOTE: I suggest installing as root.



Download the pyzmq code:
     $ git clone http://github.com/ellisonbg/pyzmq.git
     $ cd pyzmq
     Edit setup.cfg.

      Example setup.cfg:



     If you don't have root permissions:
      [build_ext]
      # Edit these to point to your installed zeromq library and header dirs.
      library_dirs = your_zeromq2_installation_prefix/lib
      include_dirs = your_zeromq2_installation_prefix/include



     If you installed zeromq2 with root permissions and the default configuration:
      [build_ext]
      # Edit these to point to your installed zeromq library and header dirs.
      library_dirs = /usr/local/lib
      include_dirs = /usr/local/include

     If you don't have root permissions:
     $ python setup.py build
     $ python setup.py install --prefix=your_favorite_installation_prefix



     If you have root permissions:
     # python setup.py install

   More information on installing pyzmq: http://www.zeromq.org/bindings:python
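
A quick way to check that the bindings built correctly (if you installed to a custom prefix, make sure it is on your PYTHONPATH first):

# Minimal sanity check for the pyzmq build.
import zmq

print(zmq.__version__)                # pyzmq version string
ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)            # creating a socket exercises the C library
sock.close()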

Installing IPython-zmq.

Download with git:
$ git clone http://github.com/omazapa/ipython.git
$ cd ipython
$ python setup.py build
As root:
# python setupegg.py develop

Done!



Wednesday, April 21, 2010

Possible future directions

I think one possible direction is to build a system for parallel processing together with other modules such as pympi, which would let heavy processing loads be spread across different kernels, with client/server communication managed by zmq.
At the moment IPython already has a system for parallel processing with MPI, but it is built on Twisted. The idea is to bring IPython to Python 3; with Twisted this is not possible, so ZeroMQ is the best way to do it.

Another strength of ZeroMQ is its performance in data transmission, and Cython is adding support for generating Python 3 code.

Monday, April 12, 2010

Porting IPython to a two process model using ZeroMQ

Abstract
----------

IPython's execution in a command-line environment will be ported to a two-process model using the ZeroMQ library for inter-process communication. This will:

- prevent an interpreter crash from destroying the user session,
- allow multiple clients to interact simultaneously with a single interpreter,
- allow IPython to reuse code for local execution and distributed computing (DC),
- give us a path to Python3 support, since ZeroMQ supports Python3 while Twisted (what we use today for DC) does not.

Deliverables

* A user-facing frontend that provides an environment like today's command-line IPython but running over two processes, with the code execution kernel living in a separate process and communicating with the frontend by using the ZeroMQ library.

* A kernel that supports IPython's features (tab-completion, code introspection, exception reporting with different levels of detail, etc.), but listening to requests over a network port, and returning results as JSON-formatted messages over the network (a hypothetical example of such a message follows).
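
For illustration only, a JSON reply to an execution request might look something like the dictionary below; the concrete message schema is something the project itself will define, so these field names are just an assumption:

# Hypothetical shape of a JSON reply message; field names are illustrative only.
example_reply = {
    "msg_type": "execute_reply",
    "status": "ok",               # or "error", with traceback details attached
    "execution_count": 1,         # used to render the "Out[1]:" prompt
    "result": "4",
}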

Project description

Currently IPython provides a command-line client that executes all code in a single process, and a set of tools for distributed and parallel computing that execute code in multiple processes (possibly but not necessarily on different hosts), using the Twisted asynchronous framework for communication between nodes. For a number of reasons, it is desirable to unify the architecture of the local execution with that of distributed computing, since ultimately many of the underlying abstractions are similar and should be reused. In particular, we would like to:

- Have even for a single user a 2-process model, so that the environment where code is being input runs in a different process from that which executes the code. This would prevent a crash of the Python interpreter executing code (because of a segmentation fault in a compiled extension or an improper access to a C library via ctypes, for example) from destroying the user session.

- Have the same kernel used for executing code locally be available over the network for distributed computing. Currently the Twisted-using IPython engines for distributed computing do not share any code with the command-line client, which means that many of the additional features of IPython (tab completion, object introspection, magic functions, etc) are not available while using the distributed computing system. Once the regular command-line environment is ported to allowing such a 2-process model, this newly decoupled kernel could form the core of a distributed computing IPython engine and all capabilities would be available throughout the system.

- Have a route to Python3 support. Twisted is a large and complex library that does not currently support Python3, and as indicated by the Twisted developers it may take a while before it is ported (http://stackoverflow.com/questions/172306/how-are-you-planning-on-handling-the-migration-to-python-3). For IPython, this means that while we could port the command-line environment, a large swath of IPython would be left 2.x-only, a highly undesirable situation. For this reason, the search for an alternative to Twisted has been active for a while, and recently we've identified the ZeroMQ (http://www.zeromq.org, zmq for short) library as a viable candidate. Zmq is a fast, simple messaging library written in C++, for which one of the IPython developers has written Python bindings using Cython (http://www.zeromq.org/bindings:python). Since Cython already knows how to generate Python3-compliant bindings with a simple command-line switch, zmq can be used with Python3 when needed.

As part of the Zmq Python bindings, the IPython developers have already developed a simple prototype of such a two-process kernel/frontend system (details below). I propose to start from this example and port today's IPython code to operate in a similar manner. IPython's command-line program (the main 'ipython' script) executes both user interaction and the user's code in the same process. This project will thus require breaking up IPython into the parts that correspond to the kernel and the parts that are meant to interact with the user, and making these two components communicate over the network using zmq instead of accessing local attributes and methods of a single global object.

Once this port is complete, the resulting tools will be the foundation (though as part of this proposal I do not expect to undertake either of these tasks) to allow the distributed computing parts of IPython to use the same code as the command-line client, and for the whole system to be ported to Python3. So while I do not intend to tackle here the removal of Twisted and the unification of the local and distributed parts of IPython, my proposal is a necessary step before those are possible.

Project Details




As part of the ZeroMQ bindings, the IPython developers have already developed a simple prototype example that provides a Python execution kernel (with none of IPython's code or features, just plain code execution) that listens on zmq sockets, and a frontend based on the InteractiveConsole class of the code.py module from the Python standard library. This example is capable of executing code, propagating errors, performing tab-completion over the network and having multiple frontends connect to and disconnect from a single kernel simultaneously, with all inputs and outputs being made available to all connected clients (thanks to zmq's PUB sockets that provide multicasting capabilities for the kernel and to which the frontends subscribe via a SUB socket).
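
The PUB/SUB side is what allows several frontends to watch the same session. A minimal "observer" client could be as small as the sketch below; the port number and the use of JSON broadcasts are assumptions of mine, not details of the prototype:

# Sketch only: a read-only frontend that just watches the kernel's broadcasts.
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"")    # no filter: receive every broadcast

while True:
    print(sub.recv_json())            # inputs and outputs from all connected clients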

We have all the example code at:

* http://github.com/ellisonbg/pyzmq/blob/completer/examples/kernel/kernel.py

* http://github.com/ellisonbg/pyzmq/blob/completer/examples/kernel/completer.py

* http://github.com/fperez/pyzmq/blob/completer/examples/kernel/frontend.py


All of this code already works, and can be seen in this example directory from the ZMQ python bindings:

* http://github.com/ellisonbg/pyzmq/blob/completer/examples/kernel


Based on this work, I expect to write a stable IPython kernel system that follows IPython's standards, with error control, a crash-recovery system and general configuration options, and also to standardize default ports and an authentication system for remote connections, etc.

The crash-recovery system is an IPython kernel module for when the kernel fails unexpectedly: it lets you retrieve the information from the session. It will be based on a log and a lock file that indicates when the kernel was not closed properly.
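
A minimal sketch of the lock-file idea; the file names, paths and recovery behaviour below are only my own illustration, not the final design:

# Sketch only: paths and behaviour are illustrative, not the final design.
import os

LOCK_FILE = os.path.expanduser("~/.ipython/kernel.lock")
SESSION_LOG = os.path.expanduser("~/.ipython/kernel-session.log")

def kernel_startup():
    if os.path.exists(LOCK_FILE):
        # the previous kernel did not shut down cleanly: offer the logged session
        print("Previous session was not closed properly; recovering from " + SESSION_LOG)
    open(LOCK_FILE, "w").close()      # mark the kernel as running

def kernel_clean_shutdown():
    if os.path.exists(LOCK_FILE):
        os.remove(LOCK_FILE)          # a clean shutdown removes the lock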