Shahzad Bhatti Welcome to my ramblings and rants!

May 30, 2008

Challenges of multicore programming

Filed under: Computing — admin @ 12:28 pm

Multicore processors have put parallel and concurrent programming at the forefront. Herb Sutter's Dr. Dobb's article "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" warned programmers that the free lunch is over. This has spurred ongoing debates about future languages and how they will rescue software development. Here are a few features that are being put forward as the panacea for multicore:

Multi-threading

The Java, C++, and C# camps have had support for native threads for a while, and they claim that these threads will let programmers take advantage of the multitude of cores. Having done multi-threading for over ten years, I must admit it is not easy. Concurrent programming based on threads and locks is error prone and can lead to nasty problems such as deadlocks, starvation, and idle spinning. And as the number of cores reaches into the thousands or millions, shared memory itself will become the bottleneck.
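To see how easily threads and locks go wrong, here is a minimal Java sketch (with a hypothetical Account class) of the classic lock-ordering deadlock: two transfers in opposite directions can each grab one lock and then wait forever for the other.

 public class DeadlockDemo {
     static class Account {
         private long balance;
         Account(long balance) { this.balance = balance; }
         void add(long amount) { balance += amount; }
     }

     // Locks 'from' then 'to'. If another thread transfers in the opposite
     // direction at the same time, each thread holds one lock and waits
     // forever for the other -- a deadlock.
     static void transfer(final Account from, final Account to, final long amount) {
         synchronized (from) {
             synchronized (to) {
                 from.add(-amount);
                 to.add(amount);
             }
         }
     }

     public static void main(String[] args) {
         final Account a = new Account(100), b = new Account(100);
         new Thread(new Runnable() { public void run() { transfer(a, b, 10); } }).start();
         new Thread(new Runnable() { public void run() { transfer(b, a, 10); } }).start();
     }
 }

The usual fix is to always acquire locks in a global order, but enforcing such a rule across a large codebase is exactly what makes lock-based code so hard.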

Software Transactional Memory

Languages like Haskell and Clojure are adding support for STM, which treats memory like a database and uses optimistic transactions. Instead of locking, each thread can change any data, but when it commits it verifies that the data has not been changed and retries the transaction if it has. This area is relatively new, but it still resembles shared memory, so it will probably face the same scalability issues.
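Java has no STM, but the optimistic read-validate-retry loop can be sketched with an AtomicReference; this is only an analogy for the transactional retry described above, not real transactional memory.

 import java.util.concurrent.atomic.AtomicReference;

 public class OptimisticCounter {
     private final AtomicReference<Integer> value = new AtomicReference<Integer>(0);

     public void increment() {
         while (true) {
             Integer snapshot = value.get();  // read a consistent snapshot
             Integer updated = snapshot + 1;  // compute the new state
             if (value.compareAndSet(snapshot, updated)) {
                 return;                      // commit succeeded
             }
             // another thread committed first -- retry, like an STM transaction
         }
     }
 }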

Actor Based Model with Message Passing

I learned about actor-based programming from reading Gul Agha's papers in school. In this model each process owns a private mailbox, and processes communicate by sending messages to one another. Languages like Erlang and, more recently, Scala use the actor model for parallel programming. This model is very scalable because there is no locking involved. Also, messages are immutable in Erlang (though they may not be in Scala), so data corruption is unlikely. The main drawback of this model is that splitting your application or algorithm into independent processes that communicate via message passing is not trivial.
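Erlang and Scala provide this machinery natively, but the core idea can be sketched in plain Java: a private mailbox drained by a single thread, so the actor's own state needs no locks. The class name here is illustrative.

 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.LinkedBlockingQueue;

 public class EchoActor implements Runnable {
     private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();

     // Other threads communicate only by sending (ideally immutable) messages.
     public void send(String message) { mailbox.offer(message); }

     public void run() {
         try {
             while (true) {
                 String message = mailbox.take(); // blocks until a message arrives
                 System.out.println("received: " + message);
             }
         } catch (InterruptedException e) {
             Thread.currentThread().interrupt(); // shut down politely
         }
     }

     public static void main(String[] args) {
         EchoActor actor = new EchoActor();
         new Thread(actor).start(); // runs until interrupted
         actor.send("hello");
     }
 }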

TupleSpace based model

In a tuple space, processes communicate by storing messages in a shared space. The tuple space provides simple APIs like get, put, read and eval, where get/read are blocking operations used to implement dataflow-style applications, and eval creates a new process to execute a task and store its result in the tuple space. I built a similar system called JavaNOW for my Ph.D. project, and there are a number of open source and commercial frameworks available such as JavaSpaces, GigaSpaces, and Blitz.
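The API is small enough to sketch as a Java interface. This is only a rough approximation of the operations described above (JavaSpaces, for comparison, names its operations write/read/take/notify):

 import java.util.concurrent.Callable;

 public interface TupleSpace {
     // store a tuple in the space
     void put(Object[] tuple);
     // block until a tuple matches the template, leaving it in the space
     Object[] read(Object[] template) throws InterruptedException;
     // block until a tuple matches the template, removing it from the space
     Object[] get(Object[] template) throws InterruptedException;
     // run the task in a new process/thread and store its result in the space
     void eval(Callable<Object[]> task);
 }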

Fork/Join

This is a standard pattern from grid computing, also known as Master/Slave, Leader/Follower, or JobJar, and it is somewhat similar to Map/Reduce: a master process adds work to a queue, workers pick up the work and store results in another queue, and the master picks up the results. It also uses other constructs of concurrent/parallel programming such as barriers and futures. There are plenty of libraries and frameworks available for this, such as Globus, Hadoop, MPI/MPL, and PVM. Java 7 will also ship a Fork/Join framework.
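Until Java 7's Fork/Join framework arrives, the pattern is easy to sketch with two blocking queues. This toy master/worker example squares a few numbers:

 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.LinkedBlockingQueue;

 public class MasterWorker {
     public static void main(String[] args) throws InterruptedException {
         final BlockingQueue<Integer> work = new LinkedBlockingQueue<Integer>();
         final BlockingQueue<Integer> results = new LinkedBlockingQueue<Integer>();

         // start a few workers that pull work and publish results
         for (int i = 0; i < 4; i++) {
             Thread worker = new Thread(new Runnable() {
                 public void run() {
                     try {
                         while (true) {
                             int n = work.take();
                             results.put(n * n); // "compute" and publish the result
                         }
                     } catch (InterruptedException e) {
                         Thread.currentThread().interrupt();
                     }
                 }
             });
             worker.setDaemon(true); // let the JVM exit when the master is done
             worker.start();
         }

         // master adds work and collects results
         for (int n = 1; n <= 10; n++) work.put(n);
         for (int i = 0; i < 10; i++) System.out.println(results.take());
     }
 }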

Messaging Middleware(ESB)

Messaging middleware offers another way to build an actor-style model, where a thread listens on a queue and other threads/processes communicate via pub/sub. Java's JMS has a lot of support for this, and frameworks like Mule, Camel, and ServiceMix can help build applications that use SEDA (Staged Event-Driven Architecture). This approach is more prominent in service-oriented architectures, but there is no reason why it can't be used to leverage multicore.
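As a sketch, here is what a single SEDA-style stage might look like with plain JMS, assuming an ActiveMQ broker; the broker URL and queue name are illustrative.

 import javax.jms.Connection;
 import javax.jms.ConnectionFactory;
 import javax.jms.Message;
 import javax.jms.MessageConsumer;
 import javax.jms.MessageListener;
 import javax.jms.Session;
 import org.apache.activemq.ActiveMQConnectionFactory;

 public class StageListener {
     public static void main(String[] args) throws Exception {
         ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
         Connection connection = factory.createConnection();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageConsumer consumer = session.createConsumer(session.createQueue("work.stage1"));
         consumer.setMessageListener(new MessageListener() {
             public void onMessage(Message message) {
                 // each stage consumes from its own queue and publishes to the next
                 System.out.println("got message: " + message);
             }
         });
         connection.start(); // begin asynchronous delivery
     }
 }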

Compiler based parallelism

High Performance Fortran and other parallel Fortran dialects use data-flow analysis to parallelize applications automatically. This might be the simplest solution for the programmer, but the generated code is often not as efficient as a hand-coded algorithm.

Conclusion

Though parallel and concurrent programming is a new model for the vast majority of programmers, the problem is not new to the high performance computing community. They have already tackled it and know what works and what does not. In the early 90s, I worked on Silicon Graphics machines that were built as large NUMA computers and provided a shared memory model for computing. Due to the inherent nature of shared memory, they were not very scalable. IBM, by contrast, built the SP1 and SP2 systems around message passing APIs (MPL, similar to MPI), where you could add as many nodes as you needed. Those systems were much more scalable. The two architectures quite nicely show the difference between the shared memory and message passing models of programming. A key advantage of message passing is that it avoids locking altogether, which is the reason I like Erlang: it supports immutability and efficient message passing. However, the biggest challenge I found with parallel programming was breaking an algorithm or application down into smaller parts that can run in parallel. Amdahl's law shows that the speedup an application gains from multiple processors is limited by its serial portion, and Donald Knuth pointed out in a recent interview that the vast majority of programs are serial in nature. In the end, there are no simple solutions on the horizon, though there are proven techniques from HPC and functional languages that can help.
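For reference, Amdahl's law makes that limit concrete: if a fraction p of a program can run in parallel on n cores, the best possible speedup is

 S(n) = \frac{1}{(1 - p) + p/n}

so a program that is 90% parallel can never run more than 10 times faster, no matter how many cores you add.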

May 23, 2008

Chasing the bright lights

Filed under: Computing — admin @ 10:59 am

I have been an IT enthusiast and professional for over twenty years, and I consider myself the "Innovator" type when it comes to technology and programming. I have seen a number of changes over the years. One of my managers used to say that we like to chase bright lights. Unfortunately, many of the trends die off naturally or fail to cross the chasm. Here are some of the things I chased that died off or faded away:

Mainframe

I worked on mainframes for a couple of years early in my career and did programming in COBOL and CICS. Mainframes are not quite dead, but I am actually glad that they have faded away.

VAX/OpenVMS

I also worked on VAX/VMS and OpenVMS systems. They were rock solid, and I am a bit disappointed that they could not evolve.

PASCAL/ICON/PROLOG/FORTRAN/BASIC

BASIC was my first programming language, but I haven't used it since the early DOS days. I learned PASCAL in college and found it better than C, but didn't see much use of it in real life. I also learned ICON and PROLOG but never found any real use for them, and I have not used FORTRAN since the old VAX days.

NUMA based servers

In the early 90s, Silicon Graphics built very powerful machines based on a NUMA architecture, which offered a shared memory model of programming across a large number of processors. Unfortunately, there were limits to how big they could become, not to mention that all the locking slowed down shared memory access. Around the same time, IBM built the SP2 systems based on message passing (MPL), which were a lot more scalable. These two programming models are now coming to the front row as multicore programming is becoming essential. I got to play with both kinds of systems at Fermilab, Metromail, TransUnion and Argonne Lab. I am sure the lessons from these early models will not be lost, and message passing based programming will win.

PowerPC NT/Solaris/AIX

Back in 94-95, Motorola created PowerPC machines that could run NT or Solaris, and I thought they were pretty cool. So I spent all my savings and bought one. Unfortunately, Sun soon abandoned Solaris on PowerPC and Microsoft did the same with NT. I finally got AIX to run on it, but it just didn't go anywhere.

BeOS

Back when Apple was looking for a next generation operating system for the Mac, it seriously considered BeOS, which was pretty cool. I played with it and bought a few books on programming for it. Unfortunately, Apple went with Steve Jobs' NeXTSTEP instead, and BeOS just faded away.

Java Ring

In the early days of Java, Sun announced big support for smart cards, which came with strong cryptography and a small memory footprint. I spent several hundred dollars on SDK kits, Java rings and smart cards from ibutton.com. This too just didn't cross the chasm and died off.

CORBA

I did a lot of CORBA in the 90s, which I thought was a lot better than the socket based networking I had done before. But CORBA had a lot of limitations, and despite the standards, it was very hard to integrate different implementations. Now it has moved out of the limelight.

Voyager

Voyager was an ORB from ObjectSpace that had very nice features for agent based computing. It also had nice concepts like Facets for dynamic composition. It inspired me to build my own ORB, "JavaNOW", which supported seamless networking and agent based computing. It too failed to cross the chasm and died off.

EJB

One of the difficult things with CORBA was the maintenance of servers, because each server ran in its own process. I had to write a pretty elaborate monitoring and lifecycle system to start these servers in the right order and restart them if they failed. I thought application servers like Weblogic and Websphere solved that problem. A lot of people who were not familiar with distributed systems tried to use EJBs as local methods and failed miserably. I built proper value objects before there was a pattern named after them and used EJBGEN to create all the stubs. I don't miss the elaborate programming, but I still see the need for application servers to host services.

MDA/MDD/Software Factories

In the early 2000s, I was very interested in model driven architecture and development and thought they might improve the software development process. Though I had seen the failure of CASE tools in the early 90s, I thought these techniques were better. I still hope better generative and metaprogramming techniques can help cut some of the development cost.

Aspect Oriented Programming

I learned about AOP in the 90s when I was looking for PhD projects, and it became popular in the early 2000s. Now it too has faded away.

Methodologies/UML

Agile methodologies have killed a number of methodologies like the Rational Unified Process (RUP), Catalyst, ICONIX, and UML modeling. I never liked heavyweight processes, but I do see the value of some high level architecture and modeling.

May 20, 2008

Rebooting philosophy in Erlang

Filed under: Erlang — admin @ 10:49 am

I just read “Let It Crash” Programming, which talks about how Erlang is designed as a fault tolerant language from the ground up. I have been learning Erlang since Joe Armstrong's book came out and have heard Joe talk about fault tolerance a few times. Steve Vinoski has also talked about Erlang: It's About Reliability in a flame war between him and Ted Neward. For me, Erlang's philosophy is reminiscent of Microsoft Windows: when Windows stops working, I just reboot the machine. Erlang does the same thing; when a process fails, it just restarts the process. About sixteen years ago, I started my career in old VAX, Mainframe and UNIX environments, and one of my managers used to say that he never had to restart the Mainframe when something failed, but somehow bugs on Windows get fixed after a reboot. When I worked at Fermilab in the mid 90s, we had server farms of hundreds of machines, and fault tolerance was quite important. Google didn't invent server farms, but it scaled them to a new level, where the failure of individual machines doesn't stop the entire application. Erlang takes the same philosophy into the programming language. Obviously, to make a truly fault tolerant application, the Erlang processes need to be spawned on separate machines. Erlang's support for message passing and distributed computing, such as OTP, makes that trivial. You can further increase fault tolerance and high availability by using machines on separate racks, networks, power sources or data centers. No wonder Facebook is using Erlang in its Chat application.
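As a crude Java analogue of that restart philosophy, a supervisor can simply rerun a worker whenever it crashes. Real Erlang/OTP supervisors add restart strategies, intensity limits and supervision trees on top of this basic idea.

 public class Supervisor {
     public static void supervise(final Runnable task) {
         Thread watcher = new Thread(new Runnable() {
             public void run() {
                 while (true) {
                     try {
                         task.run();     // run the worker
                         return;         // normal exit: stop supervising
                     } catch (RuntimeException e) {
                         // "let it crash": log the failure and restart the task
                         System.err.println("worker crashed, restarting: " + e);
                     }
                 }
             }
         });
         watcher.start();
     }
 }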

May 19, 2008

Sprint like Hell!

Filed under: Methodologies — admin @ 9:44 pm

I think "sprint" is the wrong metaphor for an iteration, because managers seem to think it means developers will run like hell to finish the work. My project has adopted Scrum recently, but it is far from the true practices, principles and values of agility. For the past many months, we have been developing like hell, and as a result our quality has suffered.

May 16, 2008

Integrating with lots of Services and AJAX

Filed under: Java — admin @ 4:27 pm

About a year and a half ago, I was involved in a system rewrite for a project that communicated with dozens of data sources. The web application allowed users to generate ad-hoc reports and perform various transactions on the data. The original project was in Perl/Mason, which proved difficult to scale given the demands of adding more data sources. The performance of the system also became problematic, because the old system waited for all the data before displaying anything to the user. I was assigned to redesign the system, and I rebuilt it using a lightweight J2EE stack including Spring, Hibernate, Spring-MVC, and Sitemesh, and used DWR, Prototype, and Scriptaculous for the AJAX based web interface. In this blog, I am going to focus on the high level design, especially the integration with other services.

High level Design

The system consisted of the following layers:

  • Domain layer – This layer defined the domain classes. Most of the data structures were just aggregates of heterogeneous data with a little structure. Also, users wanted to add new data sources with minimal time and effort, so a number of generic classes were defined to represent them.
  • Data provider layer – This layer provided services to search and aggregate data from different sources. Each provider published the query data that it required and the output data that it supported.
  • Data aggregation layer – This layer collected data from multiple data sources and allowed the UI to pull the data as it became available.
  • Service layer – This layer provided high level operations for querying, reporting and transactions.
  • Presentation layer – This layer provided the web based interface. It made significant use of AJAX to show the data incrementally.

Domain layer

Most of the data services simply returned rows of data with little structure and commonality, so I designed general purpose classes to represent rowsets and columns:

MetaField

represents the meta information for each atomic data element used for reporting purposes. It stores information such as the name and type of the field.

DataField

represents both a MetaField and its value. The value could be one of the following (see the sketch after this list):

  • UnInitailized – This marker interface signifies that the data is not yet populated. It was used to give visual cues to the users for data elements that were still waiting for a response from the data providers.
  • DataError – This class stores an error encountered while accessing the data item. It had subclasses such as
    • UnAvailable – the data is not available from the service
    • TimeoutError – the service timed out
    • ServerError – any unexpected server side error.
  • The actual value from the data provider.
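Here is a minimal sketch of how these two classes might look; the names follow the description above, but the fields are inferred:

 // Inferred sketch -- the real classes carried more reporting metadata.
 class MetaField {
     private final String name;
     private final Class type;
     MetaField(String name, Class type) { this.name = name; this.type = type; }
     String getName() { return name; }
     Class getType() { return type; }
 }

 class DataField {
     private final MetaField metaField;
     private Object value; // an UnInitailized marker, a DataError, or the real value
     DataField(MetaField metaField, Object initial) { this.metaField = metaField; this.value = initial; }
     MetaField getMetaField() { return metaField; }
     Object getValue() { return value; }
     void setValue(Object value) { this.value = value; }
 }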

DataSink

The data provider allowed clients to specify the amount of data they needed; however, many of the data providers had internal limits on the size of the data they could return, so fetching the data required multiple invocations of the underlying services. The DataSink interface allowed higher level services to consume the data from each data provider in streaming fashion, which improved UI interaction and minimized the memory required to buffer the data from the service providers. Here is the interface for the DataSink callback:

 public interface DataSink {
     /**
      * This method allows clients to consume a set of tuples. The client
      * returns true when it wants to stop processing, and no further calls
      * will be made to the providers.
      * @param set - set of new tuples received from the data providers
      * @return - true if the client wants to stop
      */
     public boolean consume(TupleSet set);

     /**
      * This method notifies the client that the data provider has finished
      * fetching all required data.
      */
     public void dataEnded();
 }
 

DataProvider

This interface is used to integrate with each data service:

 public interface DataProvider {

     public int getPriority();

     public MetaField[] getRequestMetaData();

     public MetaField[] getResponseMetaData();

     public DataSink invoke(Map context, DataField[] inputParameters) throws DataProviderException;
 }

DataProviderLocator

This interface used a configuration file to locate the data providers needed for a query:

   public interface DataProviderLocator {
         public DataProvider[] getDataProviders(MetaField[] input, MetaField[] output);
   }
 

DataExecutor

Implementations of this interface used Java's Executors to send off queries to the different data providers in parallel.

   public interface DataExecutor {
         public void execute();
   }
 

The implementation of this interface manages the dependencies between data providers and runs them in separate threads.
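A simplified implementation might fan out to the providers with a thread pool; this sketch ignores the dependency ordering and error bookkeeping of the real class:

 import java.util.Map;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

 class SimpleDataExecutor implements DataExecutor {
     private final ExecutorService pool = Executors.newFixedThreadPool(16);
     private final DataProvider[] providers;
     private final Map context;
     private final DataField[] input;

     SimpleDataExecutor(DataProvider[] providers, Map context, DataField[] input) {
         this.providers = providers;
         this.context = context;
         this.input = input;
     }

     public void execute() {
         for (int i = 0; i < providers.length; i++) {
             final DataProvider provider = providers[i];
             pool.submit(new Runnable() {
                 public void run() {
                     try {
                         // the provider streams its rows toward the aggregator
                         provider.invoke(context, input);
                     } catch (DataProviderException e) {
                         // the real class recorded a DataError for the provider's fields
                         e.printStackTrace();
                     }
                 }
             });
         }
     }
 }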

DataAggregator

This class stored the results of all data providers in a rowset format, where each row was an array of data fields. It was consumed by the AJAX clients, which polled for new data.

   public interface DataAggregator {
       public void add(DataField[] keyFields, DataField[] valueFields);
       public DataField[] keyFields();
       public DataField[] dequeue(DataField[] keyFields) throws NoMoreDataException;
   }
 

The first method is used by the DataExecutor to add data to the aggregator. In our application, each report had some kind of key field, such as a SKU#. In some cases the key was passed in by the user, and in other cases it was queried before the actual search. The second method returns those key fields. The third method was used by the AJAX clients to poll for new data.
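On the controller side, each AJAX poll boils down to something like this hypothetical handler (the names are illustrative; DWR exposed a similar method to the browser):

 import javax.servlet.http.HttpSession;

 public class PollController {
     public DataField[] poll(HttpSession session, DataField[] keyFields) {
         DataAggregator aggregator = (DataAggregator) session.getAttribute("aggregator");
         try {
             // next batch of fields that arrived for this row since the last poll
             return aggregator.dequeue(keyFields);
         } catch (NoMoreDataException e) {
             session.removeAttribute("aggregator"); // done: the view stops polling
             return new DataField[0];
         }
     }
 }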

Service Layer

This layer provides an abstraction for communicating with the underlying data locators, providers and aggregators:

   public interface DataProviderService {
       public DataAggregator search(DataField[] inputFields, DataField[] outputFields);
   }
 

End to End Example

 +--------------------------+
 | Client selects           |
 | input/output fields      |
 | and sends search request |
 | View renders initial     |
 | table.                   |
 +--------------------------+
         |          ^
         V          |
 +-----------------------+
 | Web Controller        |
 | creates DataFields    |
 | and calls service.    |
 | It then stores        |
 | aggregator in session.|
 +-----------------------+
         |          ^
         V          |
 +------------------------+
 | Service calls locators,|
 | and executor.          |
 |                        |
 | It returns aggregator  |
 |                        |
 +------------------------+
         |          ^
         V          |
 +------------------------+
 | Executor calls         |
 | providers and adds     |
 | responses to aggregator|
 |                        |
 +------------------------+
         |          ^
         V          |
 +---------------------+
 | Providers call      |
 | underlying services |
 | or database queries |
 +---------------------+
 
 
 
 +------------------------+
 | Client sends AJAX      |
 | request for new data   |
 | fields. View uses      |
 | $('cellid').value to   |
 | update table.          |
 +------------------------+
         |
         V
 +-----------------------+
 | Web Controller        |
 | calls aggregator      |
 | to get new fields     |
 | It cleans up aggreg.  |
 | when done.            |
 +-----------------------+
         |
         V
 +----------------+
 | Aggregator     |
 +----------------+
 
  1. Client selects the type of report, where each report has slightly different input data fields.
  2. Client opens the application and selects the data fields he/she is interested in.
  3. Client hits the search button.
  4. The web controller intercepts the request and converts the form into arrays of input and output data field objects.
  5. The web controller calls the search method of DataProviderService and stores the DataAggregator in the session. Though our
    application used multiple servers, we used sticky sessions and didn't need to replicate the search results. The controller then
    sends the key fields back to the view.
  6. The view uses the key data to populate the report table, then starts polling the server for the incoming data.
  7. Each poll request finds new data and returns it to the view, which then populates the table cells. When all the data has been
    polled, the aggregator throws NoMoreDataException and the view stops polling.
  8. The view also stops polling after two minutes in case the service stalls. In that case, the aggregator is cleared from the
    session by a background thread.

Lessons Learned

This design has served us well in terms of performance and extensibility, but we had some scalability issues because we allowed the output of one provider to be used as the input of another provider. As a result, some threads sat idle, so we added some smarts to the executors to spawn threads only when input data was available. Also, though some of the data sources provided asynchronous services, most didn't, and for others we had to query the database. If the services had been purely asynchronous, we could have used a reactive style of concurrency with only two threads per search instead of almost one thread per provider: one thread sending requests to all the services, and another polling the unfinished providers for responses and adding them to the aggregator as they complete. I think this kind of application is much better suited to a language like Erlang, which provides extremely lightweight processes so that you can easily launch hundreds of thousands of them. Erlang also has built-in support for the tuples used in our application.

May 10, 2008

My blog has been hacked and spammed!

Filed under: Computing — admin @ 10:23 am

I have been blogging since 2002. I started with Blojsom, which was based on Java/JSP. It worked pretty well, but when I switched my ISP a couple of years ago, I could no longer run my own Tomcat, so I switched to WordPress. Though it is much more user friendly, I had a lot of problems with SPAM in the comments and finally just disabled them. Lately, however, spammers have gotten much more sophisticated: they added SPAM to my header.php and footer.php and even modified and added blog entries in the database. As a result, my blog has been removed from the search engines. I manually fixed the contents, but found that they get changed every day. I am not quite sure how the spammers are getting access. For now, I have changed my database password and added some ways to detect file changes. Let me know if you have any ways to fix this for good.

May 7, 2008

How not to handle errors for an Ecommerce site!

Filed under: Computing — admin @ 5:36 pm

I have been reading and hearing a great deal about the Scala language, which is an object-oriented and functional hybrid implemented on the JVM and CLR. So, I decided to buy the only book available, from Artima. I had long been a subscriber of Artima, so when I ordered I logged in and entered my credit card information. However, when I hit enter, I got a page with a cryptic error message. This was not what I expected: I hoped to get a link to the book, but instead I had no idea what had happened. Worse, there was no indication of what to do or who to contact. I found a phone number on the website, but when I called it, the phone company told me it was disconnected. The only email address I could find on the site was for the webmaster, so I emailed the webmaster and explained what happened, but never heard back. I knew Bill Venners ran the site, so I searched for his email and finally found it through Google. I emailed Bill and explained the situation. Bill was quick to respond, and I finally got the link to the ebook within half an hour. Bill explained that the bug was not common and that they have very high standards for testing, but I was still aggravated by the way the error was handled without giving any clues to the customer. Clearly, building a site that handles credit cards requires higher standards. When an error occurs during a transaction, I expect a clear message about what went wrong, whether my credit card was charged (mine was), and some kind of contact page, email address, IM/chat or phone number. I also like the email confirmation that you generally get from ecommerce sites after a transaction.
In the end, I was glad to get the PDF of the book. I am finding Scala to be a really cool language, with features from a number of languages like Lisp, Smalltalk, Eiffel, Haskell, Erlang, and ML. Most of all, it takes advantage of Java's ecosystem, with its tons of libraries and tools.

IT Sweatshops

Filed under: Computing — admin @ 11:50 am

Though I blogged about sweatshops a few years ago, I was recently talking to a friend who works for a company that was named one of the "15 best places to work" in Seattle Metropolitan's May issue, and I found out that the company's HR department had pushed employees to vote to get onto the list. The company is not a bad place to work, but like many other companies it is a sort of sweatshop. Having worked for more than sixteen years at over ten companies as an employee and consultant, I can relate. The truth is that the IT departments of most companies are sweatshops, where workers are pushed to work incredible hours and sacrifice nights and weekends. In fact, my current employer is no different. In retrospect, I have found five big reasons that contribute to such environments:

  1. Taylorism – Despite the popularity of agile methodologies and the claim that agility has crossed the chasm, I have found that a command-and-control structure based on the Taylorist mentality is still rampant in most places. Management in most places thinks that giving workers impossible deadlines will force them to work harder, which in practice means putting in 60+ hours/week. I have seen companies claim to be agile and to promote working smart over working hard, but their practices were no different. These people try to increase velocity by any means (e.g. overtime). I heard one manager brag about how his team had a higher velocity than the consultants they had hired to learn agile practices, ignoring the fact that the team was putting in 70-80 hours/week.
  2. Offshoring/H1 – Though this may not be a politically correct thing to say, offshoring and H1 visas have lowered the value of software developers. Despite Paul Graham's essay on productivity, most management measures programmers by their rates. Also, many H1 visa holders are unmarried and tend to work longer hours until they get their green cards.
  3. Dot com boom/bust – This may be nostalgia, but I feel programmers had more respect before the dot com boom and bust. I admit that during the boom programmers were overvalued, but they have not regained their prior status.
  4. No overtime – The fact that IT workers are not eligible for overtime gives management an incentive to ask for any amount of work. There have been some lawsuits against game companies and IBM, but things would be a lot different if this rule changed. This is probably one of the reasons I would be open to creating a workers' union for IT folks.
  5. 24/7 – With the widespread usage of the Internet, all companies want to become 24/7 shops, even when they don't need to be. This has added convenience for consumers, but IT folks have to pay for it. For example, in my company developers are responsible for operations, which adds considerable work.

Conclusion
I don't see this trend subsiding easily in the near future. Most companies measure dedication, and award promotions, by how many hours one puts in. To most employers and recruiters, the phrase "family friendly environment" is a code word for a candidate who is not committed. The only solutions to the sweatshop mentality I see are adopting agile values, changing overtime policies, or becoming an independent contractor and making your own rules. Agile practices offer a glimmer of hope here, but bad habits are hard to break, and many companies adopt agile processes without adopting the key values they promote. In the early 90s, when Total Quality Management was in vogue, I saw my company change the titles of managers to facilitators and directors to coaches, and yet the culture remained the same. Today, I see many companies change the titles of team leads or project managers to scrum masters and managers to product owners, and think they are doing agile.

Powered by WordPress