Shahzad Bhatti

May 10, 2010

Building a stock quote server in Erlang using Ejabberd, XMPP, Bosh, Exmpp, Strophe and Yaws

Filed under: Erlang — admin @ 1:40 pm

Recently, I have been building a stock quote server at work that publishes financial data using Ejabberd, XMPP, PubSub, Exmpp and Bosh on the server side and the Strophe library on the web front end. I will describe a simplified implementation of the quote server using Yahoo Quotes.

Installation

Download Ejabberd and go through the installation wizard. You will be asked for your host name, an admin account/password, and whether ejabberd will be running in a clustered environment. For this tutorial, we will be running ejabberd on a single machine. Once installed, you can start the ejabberd server using

 /Applications/ejabberd-2.1.3/bin/ejabberdctl start
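
You can check that the server is running, and stop it later, with the companion ejabberdctl commands:

 /Applications/ejabberd-2.1.3/bin/ejabberdctl status
 /Applications/ejabberd-2.1.3/bin/ejabberdctl stop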
 

As I am using a Mac, the actual path on your machine may be different. ejabberd comes with a web-based admin tool that you can access at

 http://<your-host-name>:5280/admin
 

where you can see available nodes, users, etc.


Registering Users

We will be creating two users: producer and consumer, where the former will be used for publishing stock quotes and the latter for subscribing to quotes on the web side, i.e.,

 sudo /Applications/ejabberd-2.1.3/bin/ejabberdctl register producer  producer
 sudo /Applications/ejabberd-2.1.3/bin/ejabberdctl register consumer  consumer
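
You can confirm that the accounts were created by listing the registered users on your host:

 sudo /Applications/ejabberd-2.1.3/bin/ejabberdctl registered_users <your-host-name>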
 

Debugging with Psi

You can debug XMPP communications using a Jabber client such as Psi. After downloading and installing it, specify your local hostname as the server.



You can then log in as consumer@<your-host-name> with the password consumer. As we will be using the PubSub protocol, you can discover the available nodes or topics using General->Service Discovery from the menu.


Downloading Sample Code

I have stored all the code needed for this example at http://github.com/bhatti/FQPubSub, which you can check out using:

 git clone git@github.com:bhatti/FQPubSub.git
 

The sample code depends on the exmpp, lhttpc, jsonerl, and yaws modules, so after downloading the code, check out the dependent modules using

 git submodule init
 git submodule update
 

The above commands will check out the dependent modules into the deps directory.

Building Sample Code

Before building, ensure you have the make and autoconf tools installed, then replace <paraclete.local> with <your-host-name> in docroot/index.html and src/quote_utils.hrl. Then type the following command

 make
 

to build all the sample code and dependent libraries.
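
The host name you just edited ends up in the macros defined in src/quote_utils.hrl, which the publisher and utility modules reference (?TEST_XMPP_SERVER, ?TEST_XMPP_PORT, ?TEST_XMPP_PUBSUB, ?PRODUCER_USERNAME, ?PRODUCER_PASSWORD, ?QUOTE_DATA). Here is a rough sketch of what that header presumably contains; the exact include paths and values in the repository may differ:

%% Sketch of src/quote_utils.hrl -- illustrative only, the actual file may differ.
-include_lib("exmpp/include/exmpp.hrl").         %% exmpp records such as #received_packet
-include_lib("exmpp/include/exmpp_client.hrl").
-include_lib("jsonerl/src/jsonerl.hrl").         %% ?record_to_json macro

-define(TEST_XMPP_SERVER, "paraclete.local").    %% replace with <your-host-name>
-define(TEST_XMPP_PORT, 5222).
-define(TEST_XMPP_PUBSUB, "pubsub.paraclete.local").
-define(PRODUCER_USERNAME, "producer").
-define(PRODUCER_PASSWORD, "producer").
-define(QUOTE_DATA, data).                       %% element name for the quote payload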

Starting Web Server

Though the web code, including the Strophe library and JavaScript, can be run directly in the browser, you can start Yaws to serve the application as follows:

 erl -pa ebin deps/exmpp/ebin/ deps/lhttpc/ebin/ deps/yaws/ebin -boot start_sasl -run web_server start 
 

Note that the web server runs continuously, so open a separate shell before typing the above command.

Publishing Quotes

Open two separate shells and type the following command in the first shell:

   erl -pa ebin deps/exmpp/ebin/ deps/lhttpc/ebin/ deps/yaws/ebin -boot start_sasl -run quote_publisher start AAPL
 

and the following command in the second shell:

   erl -pa ebin deps/exmpp/ebin/ deps/lhttpc/ebin/ deps/yaws/ebin -boot start_sasl -run quote_publisher start IBM
 

The above commands start Erlang processes that periodically poll Yahoo Quotes and publish the quotes to the AAPL and IBM nodes respectively.

Next, point your browser to http://<your-host-name>:8000/ and add the “IBM” and “AAPL” symbols; you should then see quotes for both symbols.

Code under the hood

Now that you are able to run the example, let’s take a look at how the code works:

Client library for Yahoo Finance

Though at work we use our own real-time stock quote feed, for this sample I implemented the stock quote feed using Yahoo Finance. src/yquote_client.hrl and src/yquote_client.erl define the client API for accessing the Yahoo Finance service. Here is the Erlang code that requests a quote over HTTP and parses it:

%%%-------------------------------------------------------------------
%%% File : yquote_client.erl
%%% Author : Shahzad Bhatti
%%% Purpose : Wrapper Library for Yahoo Stock Quotes
%%% Created : May 8, 2010
%%%-------------------------------------------------------------------
-module(yquote_client).

-author('bhatti@plexobject.com').

-export([
         quote/1
        ]).

-record(quote, {
        symbol,
        price,
        change,
        volume,
        avg_daily_volume,
        stock_exchange,
        market_cap,
        book_value,
        ebitda,
        dividend_per_share,
        dividend_yield,
        earnings_per_share,
        week_52_high,
        week_52_low,
        day_50_moving_avg,
        day_200_moving_avg,
        price_earnings_ratio,
        price_earnings_growth_ratio,
        price_sales_ratio,
        price_book_ratio,
        short_ratio}).

quote(Symbol) ->
    inets:start(),
    {ok,{_Status, _Headers, Response}} = http:request(get, {url(Symbol), []},
        [{timeout, 5000}], [{sync, true}]),
    Values = re:split(Response, "[,\r\n]"),
    #quote{
        symbol = list_to_binary(Symbol),
        price = to_float(lists:nth(1, Values)),
        change = to_float(lists:nth(2, Values)),
        volume = to_integer(lists:nth(3, Values)),
        avg_daily_volume = to_integer(lists:nth(4, Values)),
        stock_exchange = lists:nth(5, Values), % to_string
        market_cap = to_float(lists:nth(6, Values)), % B
        book_value = to_float(lists:nth(7, Values)),
        ebitda = to_float(lists:nth(8, Values)), % B
        dividend_per_share = to_float(lists:nth(9, Values)),
        dividend_yield = to_float(lists:nth(10, Values)),
        earnings_per_share = to_float(lists:nth(11, Values)),
        week_52_high = to_float(lists:nth(12, Values)),
        week_52_low = to_float(lists:nth(13, Values)),
        day_50_moving_avg = to_float(lists:nth(14, Values)),
        day_200_moving_avg = to_float(lists:nth(15, Values)),
        price_earnings_ratio = to_float(lists:nth(16, Values)),
        price_earnings_growth_ratio = to_float(lists:nth(17, Values)),
        price_sales_ratio = to_float(lists:nth(18, Values)),
        price_book_ratio = to_float(lists:nth(19, Values)),
        short_ratio = to_float(lists:nth(20, Values))}.

url(Symbol) ->
    "http://finance.yahoo.com/d/quotes.csv?s=" ++ Symbol ++ "&f=l1c1va2xj1b4j4dyekjm3m4rr5p5p6s7".

to_float(<<"N/A">>) ->
    -1;
to_float(Bin) ->
    {Multiplier, Bin1} = case bin_ends_with(Bin, <<$B>>) of
        true ->
            {1000000000, bin_replace(Bin, <<$B>>, <<>>)};
        false ->
            case bin_ends_with(Bin, <<$M>>) of
                true ->
                    {1000000, bin_replace(Bin, <<$M>>, <<>>)};
                false ->
                    {1,Bin}
            end
    end,
    L = binary_to_list(Bin1),
    list_to_float(L) * Multiplier.
 

Note that I am omitting some code in the above listing, as I just wanted to highlight the HTTP request and parsing code.
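
The omitted pieces are small helpers; a minimal sketch of to_integer/1, bin_ends_with/2 and bin_replace/3 (my own guesses, the repository versions may differ) could look like this:

%% Sketch of the omitted helpers -- illustrative, not the repository code.
to_integer(<<"N/A">>) ->
    -1;
to_integer(Bin) ->
    list_to_integer(binary_to_list(Bin)).

%% True if the binary Bin ends with Suffix.
bin_ends_with(Bin, Suffix) ->
    lists:suffix(binary_to_list(Suffix), binary_to_list(Bin)).

%% Replace every occurrence of Pattern in Bin with Replacement.
bin_replace(Bin, Pattern, Replacement) ->
    re:replace(Bin, Pattern, Replacement, [global, {return, binary}]).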

Publishing the Stock Quote

I used the exmpp library to communicate with the XMPP server from Erlang. Here is the code for publishing the quotes using the BOSH/XMPP protocol:

%%%-------------------------------------------------------------------
%%% File : quote_publisher.erl
%%% Author : Shahzad Bhatti
%%% Purpose : OTP server for publishing quotes
%%% Created : May 8, 2010
%%%-------------------------------------------------------------------
-module(quote_publisher).

-export([
    start/1,
    start/5,
    stop/1]).

-export([init/5]).

-include_lib("quote_utils.hrl").

-record(state, {session, jid, service=?TEST_XMPP_PUBSUB, symbol}).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% APIs
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
start(Symbol) ->
    start(?TEST_XMPP_SERVER, ?TEST_XMPP_PORT, ?PRODUCER_USERNAME,
        ?PRODUCER_PASSWORD, Symbol).

start(Host, Port, User, Password, Symbol) ->
    spawn(?MODULE, init, [Host, Port, User, Password, Symbol]).

stop(Pid) ->
    Pid ! stop.

init(Host, Port, User, Password, Symbol) ->
    {ok, {MySession, MyJID}} = quote_utils:connect(Host, Port, User, Password),
    State = #state{session=MySession, jid=MyJID, symbol = Symbol},
    create_symbol_node(State),
    loop(State).

loop(#state{session=MySession, jid=_MyJID, service = _Service,
        symbol = _Symbol}=State) ->
    receive
        stop ->
            quote_utils:disconnect(MySession);
        Record = #received_packet{packet_type=message, raw_packet=_Packet} ->
            loop(State);
        Record ->
            loop(State)
    after 2000 ->
        publish_quote(State),
        loop(State)
    end.

create_symbol_node(#state{session=MySession, jid=MyJID, service = Service,
        symbol = Symbol}) ->
    IQ = exmpp_client_pubsub:create_node(Service, Symbol),
    PacketId = exmpp_session:send_packet(MySession, exmpp_stanza:set_sender(IQ, MyJID)),
    PacketId2 = erlang:binary_to_list(PacketId),
    receive #received_packet{id=PacketId2, raw_packet=Raw} ->
      case exmpp_iq:is_error(Raw) of
        true -> {error, Raw};
        _ -> ok
      end
    end.

publish_quote(#state{session=MySession, jid=MyJID, service = Service, symbol = Symbol}) ->
    Quote = yquote_client:quote(Symbol),
    JsonQuote = ?record_to_json(quote, Quote),
    M = exmpp_xml:element(?QUOTE_DATA),
    IQ = exmpp_client_pubsub:publish(Service, Symbol, exmpp_xml:append_cdata(M,
            JsonQuote)),
    Xml = exmpp_stanza:set_id(exmpp_stanza:set_sender(IQ, MyJID), Symbol),
    PacketId = exmpp_session:send_packet(MySession, exmpp_stanza:set_sender(IQ, MyJID)),
    PacketId2 = erlang:binary_to_list(PacketId),
    receive #received_packet{id=PacketId2, raw_packet=Raw} ->
      case exmpp_iq:is_error(Raw) of
        true -> error;
        _ -> ok
      end
    end.
 

In the above code, a process is created for each symbol, which periodically polls the stock quote and publishes it to the XMPP node using the PubSub/BOSH protocol. Note that a unique node is created for each symbol, and a node must be created before anyone can publish or subscribe to it. Also note that the publish/subscribe APIs use a request/acknowledgement protocol, so after sending a request the process waits to receive the acknowledgement.
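
You can also drive the publisher directly from an Erlang shell started with the same -pa paths as above; start/1 returns the process id from spawn, and stop/1 sends it the stop message (shell output below is illustrative):

1> Pid = quote_publisher:start("AAPL").
<0.64.0>
2> quote_publisher:stop(Pid).
stop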

Here are some utility functions used by the publisher:

-module(quote_utils).

-include_lib("quote_utils.hrl").

-export([
    init_session/2,
    connect/4,
    disconnect/1]).

bosh_url(Host, Port) ->
    "http://" ++ Host ++ ":" ++ integer_to_list(Port) ++ "/http-bind".

connect(Host, _Port, User, Password) ->
    safe_start_apps(),
    MySession = exmpp_session:start({1,0}),
    exmpp_xml:start_parser(), %% Create XMPP ID (Session Key):
    MyJID = exmpp_jid:make(User, Host, random),
    %% Create a new session with basic (digest) authentication:
    exmpp_session:auth_basic_digest(MySession, MyJID, Password),

    {ok, _StreamId, _Features} = exmpp_session:connect_BOSH(MySession, bosh_url(Host, 5280), Host, []),
    try quote_utils:init_session(MySession, Password)
    catch
        _:Error -> io:format("got error: ~p~n", [Error]), {error, Error}
    end,
    {ok, {MySession, MyJID}}.

init_session(MySession, Password) ->
    %% Login with defined JID / Authentication:
    try exmpp_session:login(MySession, "PLAIN")
    catch
        throw:{auth_error, 'not-authorized'} ->
        %% Try creating a new user:
        io:format("Register~n",[]),
        %% In a real life client, we should trap error case here
        %% and print the correct message.
        exmpp_session:register_account(MySession, Password),
        %% After registration, retry to login:
        exmpp_session:login(MySession)
    end,
    %% We explicitly send presence:
    exmpp_session:send_packet(MySession, exmpp_presence:set_status(exmpp_presence:available(), "Ready to publish!!!")),
    ok.

disconnect(MySession) ->
    exmpp_session:stop(MySession).

safe_start_apps() ->
    try start_apps()
    catch
        _:Error -> io:format("apps already started : ~p~n", [Error]), {error, Error}
    end.

start_apps() ->
    ok = application:start(exmpp),
    ok = application:start(crypto),
    ok = application:start(ssl),
    ok = application:start(lhttpc).
 

Note that the above code auto-registers users, which is not recommended for production use.

JavaScript code using the Strophe library

The web application depends on jQuery, Strophe and the Strophe PubSub plugin. These libraries are included in the docroot directory and are imported by index.html. The Strophe library and ejabberd 2.1.3 support cross-domain scripting, so the BOSH service here does not need to be on the same domain/port, but it must serve a /crossdomain.xml policy file that allows access from wherever index.html lives. The JavaScript initializes the connection parameters as follows (you will have to change the host):

   1 <script type="text/javascript">
 
   2     // The BOSH_SERVICE here doesn't need to be on the same domain/port, but
 
   3     // it must have a /crossdomain.xml policy file that allows access from
 
   4     // wherever crossdomain.html lives.
   5     // TODO: REPLACE <paraclete.local> with your <host-name>
 
   6     var HOST = 'paraclete.local';
   7     var JID = 'consumer@' + HOST;
 
   8     var PASSWORD = 'consumer';
   9     var BOSH_SERVICE = 'http://' + HOST + ':5280/http-bind'; //'/xmpp-httpbind'
 
  10     var PUBSUB = 'pubsub.' + HOST;
  11     var connection = null;
 
  12     var autoReconnect = true;
  13     var hasQuotes = [];
  14     var subscriptions = [];
 
  15   
  16     function log(msg) {
  17         $('#log').append('<div></div>').append(document.createTextNode(msg));
 
  18     }
  19   
  20     function rawInput(data) {
  21         //log('RECV: ' + data);
 
  22     }
  23     
  24     function rawOutput(data) {
  25         //log('SENT: ' + data);
 
  26     }
  27     function onQuote(stanza) {
  28         //log('onQuote###### ' + stanza);
 
  29         try {
  30             $(stanza).find('event items item data').each(function(idx, elem) {
  31                 quote = jQuery.parseJSON($(elem).text());
 
  32                 //{"price":235.86,"change":-10.39,"volume":59857756,"avg_daily_volume":20775600,"stock_exchange":[78,97,115,100,97,113,78,77],"market_cap":2.146e+11,
 
  33                 //"book_value":43.257,"ebitda":1.5805e+10,"dividend_per_share":0.0,"dividend_yield":-1,"earnings_per_share":11.796,"week_52_high":272.46,"week_52_low":119.38,
 
  34                 //"day_50_moving_avg":245.206,"day_200_moving_avg":214.119,"price_earnings_ratio":20.88,"price_earnings_growth_ratio":1.05,"price_sales_ratio":4.38,
 
  35                 //"price_book_ratio":5.69,"short_ratio":0.7}
  36                 if (hasQuotes[quote.symbol] != undefined) {
 
  37                     $('price_' + quote.symbol).innerHTML = quote.price;
  38                     $('change_' + quote.symbol).innerHTML = quote.change;
  39                     $('volume_' + quote.symbol).innerHTML = quote.volume;
 
  40                 } else {
  41                     hasQuotes[quote.symbol] = true;
  42                     $('#quotesTable > tbody:last').append('<tr id="quote_' +
 
  43                         quote.symbol + '"><td>' + quote.symbol +
  44                         '</td><td id="price_' + quote.symbol + '">' + quote.price +
 
  45                         '</td><td id="change_' + quote.symbol + '" class="class_change_' + quote.symbol + '">' +
  46                         quote.change + '</td><td id="volume_' +
 
  47                         quote.symbol + '">' +
  48                         quote.volume + '</td></tr>');
  49                 }
 
  50 
  51                 if(quote.change < 0) {
  52                     $('.class_change_' + quote.symbol).css('color', 'red');
 
  53                 } else {
  54                     $('.class_change_' + quote.symbol).css('color', 'green');
 
  55                 }
  56             });
  57         } catch (e) {
  58             log(e)
 
  59         }
  60         return true;
  61     }
  62 
 
  63     function handleSubscriptionChange (stanza) {
  64         //log("***handleSubscriptionChange Received: " + stanza);
 
  65     }
  66         
  67     function onConnect(status) {
  68         if (status == Strophe.Status.CONNECTING) {
 
  69             log('Strophe is connecting.');
  70         } else if (status == Strophe.Status.CONNFAIL) {
  71             log('Strophe failed to connect.');
 
  72             $('#connect').get(0).value = 'connect';
  73         } else if (status == Strophe.Status.DISCONNECTING) {
 
  74             log('Strophe is disconnecting.');
  75         } else if (status == Strophe.Status.DISCONNECTED) {
  76             if (autoReconnect) {
 
  77                 log( "Streaming disconnected. Trying to reconnect...", METHODNAME );
  78                 connection.connect($('#jid').get(0).value, $('#pass').get(0).value, onConnect);
  79                 log( "Streaming reconnected.", METHODNAME );
 
  80             } else {
  81                 log('Strophe is disconnected.');
  82                 $('#connect').get(0).value = 'connect';
 
  83                 //publishEvent( "streamingDisconnected" );
  84             }
  85         } else if (status == Strophe.Status.CONNECTED) {
 
  86             log('Strophe is connected.');
  87             //log('QUOTE_BOT: Send a message to ' + connection.jid + ' to talk to me.');
 
  88             connection.addHandler(onMessage, null, 'message', null, null, null);
  89             connection.send($pres().tree());
 
  90             publishEvent( "streamingConnected" );
  91         }
  92     }
  93 
 
  94     function subscribe(symbol) {
  95         if (subscriptions[symbol]) return;
  96         try {
 
  97             connection.pubsub.subscribe(JID, PUBSUB, symbol, [], onQuote, handleSubscriptionChange);
  98             subscriptions[symbol] = true;
  99             log("Subscribed to " + symbol);
 
 100         } catch (e) {
 101             alert(e)
 102         }
 103     }
 104     function unsubscribe(symbol) {
 
 105         if (!subscriptions[symbol]) return;
 106         try {
 107             connection.pubsub.unsubscribe(JID, PUBSUB, symbol, handleSubscriptionChange);
 108             subscriptions[symbol] = false;
 
 109             log("Unsubscribed from " + symbol);
 110         } catch (e) {
 111             alert(e)
 112         }
 
 113     }
 114   
 115     function onMessage(msg) {
 116         var to = msg.getAttribute('to');
 
 117         var from = msg.getAttribute('from');
 118         var type = msg.getAttribute('type');
 119         var elems = msg.getElementsByTagName('body');
 
 120   
 121         if (type == "chat" && elems.length > 0) {
 122             var body = elems[0];
 
 123             log('QUOTE_BOT: I got a message from ' + from + ': ' + Strophe.getText(body));
 124             var reply = $msg({to: from, from: to, type: 'chat'}).cnode(Strophe.copyElement(body));
 125             connection.send(reply.tree());
 
 126             log('QUOTE_BOT: I sent ' + from + ': ' + Strophe.getText(body));
 127         }
 128         // we must return true to keep the handler alive.
 
 129         // returning false would remove it after it finishes.
 
 130         return true;
 131     }
 132  
 133     $(document).ready(function () {
 
 134         connection = new Strophe.Connection(BOSH_SERVICE);
 135         connection.rawInput = rawInput;
 136         connection.rawOutput = rawOutput;
 137         connection.connect(JID, PASSWORD, onConnect);
 138         //connection.disconnect();
 
 139         $('#add_symbol').bind('click', function () {
 140             var symbol = $('#symbol').get(0).value;
 
 141             subscribe(symbol);
 142         });
 143     });
 144 
 145 </script>
 146 
 
 

When the document is loaded, the connection to the ejabberd server is established. Here are the form and table that are used to add subscriptions and display the current quote information for the symbols:

<form name='symbols'>
    <label for='symbol'>Symbol:</label>
    <input type='text' id='symbol'/>
    <input type='button' id='add_symbol' value='add' />
</form>
<hr />
<div id='log'></div>
<table id="quotesTable" width="600" border="2" bordercolor="#333333">
    <thead>
        <tr>
            <th>Symbol</th>
            <th>Price</th>
            <th>Change</th>
            <th>Volume</th>
        </tr>
    </thead>
    <tbody>
    </tbody>
</table>
 

When the form is submitted, it calls the subscribe method, which in turn sends a subscription request to the ejabberd server. When a new quote is received, the onQuote function is called, which inserts a row into the table when a new symbol is added or updates the quote information if the row already exists.

Conclusion

ejabberd, XMPP, exmpp, BOSH and Strophe provide a robust and mature messaging solution and are especially suitable for building highly scalable and interactive web applications. Though the above code is fairly simple, the same design principles can be used to support a large volume of stock quote updates. As we need to send quotes for tens of thousands of symbols on every tick within a fraction of a second, Erlang provides a very scalable solution, where each symbol is simply served by its own Erlang process. Finally, I am still learning about ejabberd’s clustering, security, and other features so that it can truly survive production load, so I would love to hear any feedback you might have from similar systems.



March 17, 2010

Smarter Email appender for Log4j with support for duplicate-removal, summary-report and JMX

Filed under: Computing — admin @ 5:06 pm

I have been using SMTPAppender for a while to notify developers when something breaks on the production site, and for the most part it works well. However, a misconfiguration or service crash can result in a large number of emails; I was hit by a similar problem at work when my mailbox suddenly got tons of emails from the production site. So I decided to write a somewhat more intelligent email appender. My goals for the appender were:

  • Throttle emails based on some configured time
  • Remove duplicate emails
  • Support JMX for dynamic configuration
  • Provide summary report with count of errors and their timings

I created a FilteredSMTPAppender class that extends SMTPAppender. The FilteredSMTPAppender defines a nested class Stats for keeping track of errors. For each unique exception, it creates an instance of Stats that stores the first and last occurrence of the exception as well as a count. The Stats class uses a hash of the stack trace to identify unique exceptions, but it ignores the first line, which often contains dynamic information. FilteredSMTPAppender registers itself as an MBean so that it can be configured at runtime. It overrides the append method to capture the event and overrides checkEntryConditions to add filtering. It also changes the layout so that the summary count of error messages is added to the footer of the email message.

The FilteredSMTPAppender uses a number of helper classes, such as ServiceJMXBeanImpl for the MBean definition and LRUSortedList to keep a fixed-size cache of exceptions. Listings of LRUSortedList and ServiceJMXBeanImpl follow the FilteredSMTPAppender listing below.

Listing of FilteredSMTPAppender.java

   1 package com.plexobject.log;
 
   2 
   3 import java.beans.PropertyChangeEvent;
   4 import java.beans.PropertyChangeListener;
   5 import java.util.Comparator;
 
   6 import java.util.Date;
   7 
   8 import javax.mail.MessagingException;
   9 
 
  10 import org.apache.commons.lang.builder.EqualsBuilder;
  11 import org.apache.commons.lang.time.FastDateFormat;
  12 
  13 import org.apache.log4j.Layout;
 
  14 import org.apache.log4j.net.SMTPAppender;
  15 import org.apache.log4j.spi.LoggingEvent;
  16 
  17 import com.plexobject.jmx.JMXRegistrar;
 
  18 import com.plexobject.jmx.impl.ServiceJMXBeanImpl;
  19 import com.plexobject.metrics.Metric;
  20 import com.plexobject.metrics.Timer;
 
  21 import com.plexobject.util.Configuration;
  22 import com.plexobject.util.LRUSortedList;
  23 
  24 public class FilteredSMTPAppender extends SMTPAppender {
 
  25 
  26     private static final String SMTP_FILTER_MIN_DUPLICATE_INTERVAL_SECS = "smtp.filter.min.duplicate.interval.secs";
  27     private static final int MAX_STATS = Configuration.getInstance().getInteger("smtp.filter.max", 100);
 
  28     private static int MIN_DUPLICATE_EMAILS_INTERVAL = Configuration.getInstance().getInteger(SMTP_FILTER_MIN_DUPLICATE_INTERVAL_SECS,
  29             60); // 1 minute
  30     private static final Date STARTED = new Date();
 
  31     private static final FastDateFormat DATE_FMT = FastDateFormat.getInstance("MM/dd/yy HH:mm");
  32 
  33     final static class Stats implements Comparable<Stats> {
 
  34 
  35         final int checksum;
  36         final long firstSeen;
 
  37         long lastSeen;
  38         long lastSent;
  39         int numSeen;
 
  40         int numEmails;
  41 
  42         Stats(LoggingEvent event) {
  43             StringBuilder sb = new StringBuilder();
 
  44             String[] trace = event.getThrowableStrRep();
  45             for (int i = 1; i < trace.length && i < 20; i++) { // top 20 lines
 
  46                 // of trace
  47                 sb.append(trace[i].trim());
  48             }
  49             this.checksum = sb.toString().hashCode();
 
  50             firstSeen = lastSeen = System.currentTimeMillis();
  51             numSeen = 1;
  52         }
  53 
  54         boolean check() {
 
  55             long current = System.currentTimeMillis();
  56             long elapsed = current - lastSent;
  57 
  58             numSeen++;
 
  59             lastSeen = current;
  60 
  61             if (elapsed > MIN_DUPLICATE_EMAILS_INTERVAL * 1000) {
  62                 lastSent = current;
 
  63                 numEmails++;
  64                 return true;
  65             } else {
 
  66                 return false;
  67             }
  68         }
  69 
 
  70         @Override
  71         public boolean equals(Object object) {
  72             if (!(object instanceof Stats)) {
 
  73                 return false;
  74             }
  75             Stats rhs = (Stats) object;
  76             return new EqualsBuilder().append(this.checksum, rhs.checksum).isEquals();
 
  77 
  78         }
  79 
  80         @Override
  81         public int hashCode() {
 
  82             return checksum;
  83         }
  84 
  85         @Override
 
  86         public String toString() {
  87             return " (" + checksum + ") occurred " + numSeen + " times, " + numEmails + " # of emails, first @" + DATE_FMT.format(new Date(firstSeen)) + ", last @" + DATE_FMT.format(new Date(lastSeen)) + " since server started @" + DATE_FMT.format(STARTED);
 
  88         }
  89 
  90         @Override
  91         public int compareTo(Stats other) {
 
  92             return checksum - other.checksum;
  93         }
  94     }
  95 
 
  96     final static class StatsCmp implements Comparator<Stats> {
  97 
 
  98         @Override
  99         public int compare(Stats first, Stats second) {
 100             return first.checksum - second.checksum;
 
 101         }
 102     }
 103     private static final LRUSortedList<Stats> STATS_LIST = new LRUSortedList<Stats>(
 
 104             MAX_STATS, new StatsCmp());
 105     private LoggingEvent event;
 106     private ServiceJMXBeanImpl mbean;
 107     private Layout layout;
 
 108 
 109     public FilteredSMTPAppender() {
 110         mbean = JMXRegistrar.getInstance().register(getClass());
 111         mbean.addPropertyChangeListener(new PropertyChangeListener() {
 112 
 
 113             @Override
 114             public void propertyChange(PropertyChangeEvent event) {
 115                 try {
 116                     if (event != null && SMTP_FILTER_MIN_DUPLICATE_INTERVAL_SECS.equalsIgnoreCase(event.getPropertyName())) {
 
 117                         MIN_DUPLICATE_EMAILS_INTERVAL = Integer.parseInt((String) event.getNewValue());
 118                     }
 119                 } catch (Exception e) {
 120                     e.printStackTrace();
 121                 }
 
 122             }
 123         });
 124 
 125     }
 126 
 127     public void append(LoggingEvent event) {
 
 128         this.event = event;
 129         if (layout == null) {
 130             layout = getLayout();
 131         }
 
 132         super.append(event);
 133     }
 134 
 135     protected boolean checkEntryConditions() {
 136         final Timer timer = Metric.newTimer(getClass().getSimpleName() + ".checkEntryConditions");
 
 137         try {
 138             boolean check = true;
 139             if (event != null) {
 
 140                 Stats newStats = new Stats(event);
 141                 Stats stats = STATS_LIST.get(newStats);
 142                 if (stats == null) {
 143                     stats = newStats;
 
 144                     STATS_LIST.add(stats);
 145                 } else {
 146                     check = stats.check();
 147                 }
 148                 if (check) {
 
 149                     setMessageFooter(stats);
 150                 }
 151             }
 152             return check && super.checkEntryConditions();
 
 153         } finally {
 154             timer.stop();
 155         }
 156     }
 157 
 
 158     private void setMessageFooter(Stats stats) {
 159         String message = event.getMessage().toString();
 160 
 161         final String footer = "\n\n-------------------------\n" + message + " - " + stats;
 
 162 
 163         if (layout != null) {
 164             setLayout(new Layout() {
 165 
 
 166                 @Override
 167                 public void activateOptions() {
 168                     layout.activateOptions();
 169 
 170                 }
 
 171 
 172                 @Override
 173                 public String format(LoggingEvent evt) {
 174                     return layout.format(evt);
 175                 }
 
 176 
 177                 @Override
 178                 public String getFooter() {
 179                     return footer;
 180                 }
 
 181 
 182                 @Override
 183                 public boolean ignoresThrowable() {
 184                     return layout.ignoresThrowable();
 
 185                 }
 186             });
 187         }
 188     }
 189 }
 190 
 
 191 
 

Listing of LRUSortedList.java

   1 package com.plexobject.util;
 
   2 
   3 import java.util.ArrayList;
   4 import java.util.Collection;
   5 import java.util.Collections;
 
   6 import java.util.Comparator;
   7 import java.util.Iterator;
   8 import java.util.List;
 
   9 import java.util.ListIterator;
  10 
  11 import org.apache.log4j.Logger;
  12 
 
  13 
  14 public class LRUSortedList<T> implements List<T> {
 
  15     private static final Logger LOGGER = Logger.getLogger(LRUSortedList.class);
  16     private final int max;
 
  17     private final Comparator<T> comparator;
  18 
  19     private final List<Pair<Long, T>> list = new ArrayList<Pair<Long, T>>();
 
  20     private final List<Pair<Long, Integer>> timestamps = new ArrayList<Pair<Long, Integer>>();
 
  21 
  22     // comparator to sort by timestamp
  23     private static final Comparator<Pair<Long, Integer>> CMP = new Comparator<Pair<Long, Integer>>() {
 
  24         @Override
  25         public int compare(Pair<Long, Integer> first, Pair<Long, Integer> second) {
 
  26             if (first.getFirst() < second.getFirst()) {
  27                 return -1;
  28             } else if (first.getFirst() > second.getFirst()) {
 
  29                 return 1;
  30             } else {
  31                 return 0;
 
  32             }
  33         }
  34     };
  35 
  36     public LRUSortedList(int max, Comparator<T> comparator) {
 
  37         this.max = max;
  38         this.comparator = comparator;
  39     }
  40 
 
  41     @Override
  42     public boolean add(T e) {
  43         if (list.size() > max) {
 
  44             removeOldest();
  45         }
  46         // add object
  47         long timestamp = System.nanoTime();
 
  48         int insertionIdx = Collections.binarySearch(this, e, comparator);
  49         if (insertionIdx < 0) {// not found
 
  50             insertionIdx = (-insertionIdx) - 1;
  51             list.add(insertionIdx, new Pair<Long, T>(timestamp, e));
  52         } else {
 
  53             // found
  54             list.set(insertionIdx, new Pair<Long, T>(timestamp, e));
  55         }
 
  56 
  57         // as timestamps are sorted, we just remove the oldest (first)
  58         if (timestamps.size() > max) {
 
  59             timestamps.remove(0);
  60         }
  61         // update timestamp
  62         Pair<Long, Integer> t = new Pair<Long, Integer>(timestamp, insertionIdx);
 
  63         timestamps.add(t);
  64         return true;
  65     }
  66 
 
  67     @Override
  68     public void add(int index, T element) {
  69         throw new UnsupportedOperationException(
 
  70                 "can't add element at arbitrary index, must use add to keep sorted order");
  71     }
  72 
  73     @Override
 
  74     public boolean addAll(Collection<? extends T> c) {
  75         for (T e : c) {
 
  76             add(e);
  77         }
  78         return c.size() > 0;
  79     }
 
  80 
  81     @Override
  82     public boolean addAll(int index, Collection<? extends T> c) {
 
  83         throw new UnsupportedOperationException(
  84                 "can't add element at arbitrary index, must use addAll to keep sorted order");
  85     }
 
  86 
  87     @Override
  88     public void clear() {
  89         list.clear();
 
  90     }
  91 
  92     @SuppressWarnings("unchecked")
  93     @Override
 
  94     public boolean contains(Object e) {
  95         if (e == null) {
  96             return false;
 
  97         }
  98         try {
  99             return Collections.binarySearch(this, (T) e, comparator) >= 0;
 
 100         } catch (ClassCastException ex) {
 101             LOGGER.error("Unexpected type for contains "
 102                     + e.getClass().getName() + ": " + e);
 
 103             return false;
 104         }
 105     }
 106 
 107     @Override
 
 108     public boolean containsAll(Collection<?> c) {
 109         for (Object e : c) {
 110             if (!contains(e)) {
 
 111                 return false;
 112             }
 113         }
 114         return true;
 
 115     }
 116 
 117     @Override
 118     public T get(int index) {
 119         Pair<Long, T> e = list.get(index);
 
 120         return e != null ? e.getSecond() : null;
 121     }
 122 
 123     public T get(Object e) {
 
 124         int ndx = indexOf(e);
 125         if (ndx >= 0) {
 126             return get(ndx);
 127         }
 
 128         return null;
 129     }
 130 
 131     @SuppressWarnings("unchecked")
 132     @Override
 
 133     public int indexOf(Object e) {
 134         try {
 135             return Collections.binarySearch(this, (T) e, comparator);
 
 136         } catch (ClassCastException ex) {
 137             LOGGER.error("Unexpected type for get " + e.getClass().getName()
 138                     + ": " + e);
 
 139             return -1;
 140         }
 141     }
 142 
 143     @Override
 144     public boolean isEmpty() {
 
 145         return list.isEmpty();
 146     }
 147 
 148     @Override
 149     public Iterator<T> iterator() {
 
 150         final Iterator<Pair<Long, T>> it = list.iterator();
 151         return new Iterator<T>() {
 
 152 
 153             @Override
 154             public boolean hasNext() {
 155                 return it.hasNext();
 
 156             }
 157 
 158             @Override
 159             public T next() {
 160                 Pair<Long, T> e = it.next();
 
 161                 return e.getSecond();
 162             }
 163 
 164             @Override
 165             public void remove() {
 
 166                 it.remove();
 167             }
 168         };
 169     }
 170 
 171     @Override
 
 172     public int lastIndexOf(Object o) {
 173         for (int i = list.size() - 1; i >= 0; i--) {
 174             T e = get(i);
 
 175             if (e.equals(o)) {
 176                 return i;
 177             }
 178         }
 179         return -1;
 
 180     }
 181 
 182     @Override
 183     public ListIterator<T> listIterator() {
 184         final ListIterator<Pair<Long, T>> it = list.listIterator();
 
 185         return buildListIterator(it);
 186     }
 187 
 188     @Override
 189     public ListIterator<T> listIterator(int index) {
 
 190         final ListIterator<Pair<Long, T>> it = list.listIterator(index);
 191         return buildListIterator(it);
 192     }
 
 193 
 194     @SuppressWarnings("unchecked")
 195     @Override
 196     public boolean remove(Object e) {
 
 197         try {
 198             int ndx = Collections.binarySearch(this, (T) e, comparator);
 199             if (ndx >= 0) {
 
 200                 remove(ndx);
 201                 return true;
 202             } else {
 203                 return false;
 
 204             }
 205 
 206         } catch (ClassCastException ex) {
 207             LOGGER.error("Unexpected type for remove " + e.getClass().getName()
 
 208                     + ": " + e);
 209             return false;
 210         }
 211     }
 
 212 
 213     @Override
 214     public T remove(int index) {
 215         Pair<Long, T> e = list.remove(index);
 
 216         Pair<Long, Integer> t = new Pair<Long, Integer>(e.getFirst(), 0);
 217 
 218         int insertionIdx = Collections.binarySearch(timestamps, t, CMP);
 
 219         if (insertionIdx >= 0) {
 220             timestamps.remove(insertionIdx);
 221         }
 222         return e != null ? e.getSecond() : null;
 
 223     }
 224 
 225     @Override
 226     public boolean removeAll(Collection<?> c) {
 
 227         boolean all = true;
 228         for (Object e : c) {
 229             all = all && remove(e);
 
 230         }
 231         return all;
 232     }
 233 
 234     @Override
 235     public boolean retainAll(Collection<?> c) {
 
 236         boolean changed = false;
 237         Iterator<?> it = c.iterator();
 238         while (it.hasNext()) {
 
 239             Object e = it.next();
 240             if (!contains(e)) {
 241                 it.remove();
 242                 changed = true;
 243             }
 
 244         }
 245         return changed;
 246     }
 247 
 248     @Override
 
 249     public T set(int index, T element) {
 250         throw new UnsupportedOperationException();
 251     }
 
 252 
 253     @Override
 254     public int size() {
 255         return list.size();
 
 256     }
 257 
 258     @Override
 259     public List<T> subList(int fromIndex, int toIndex) {
 
 260         List<T> tlist = new ArrayList<T>();
 261         List<Pair<Long, T>> plist = list.subList(fromIndex, toIndex);
 
 262         for (Pair<Long, T> e : plist) {
 263             tlist.add(e.getSecond());
 264         }
 265         return tlist;
 
 266     }
 267 
 268     @Override
 269     public Object[] toArray() {
 270         return subList(0, list.size()).toArray();
 
 271     }
 272 
 273     @SuppressWarnings("hiding")
 274     @Override
 275     public <T> T[] toArray(T[] a) {
 
 276         return subList(0, list.size()).toArray(a);
 277     }
 278 
 279     @Override
 280     public String toString() {
 
 281         StringBuilder sb = new StringBuilder();
 282         Iterator<T> it = iterator();
 283         while (it.hasNext()) {
 
 284             sb.append(it.next() + ", ");
 285         }
 286         return sb.toString();
 287     }
 288 
 
 289     private void removeOldest() {
 290         timestamps.remove(timestamps.size() - 1);
 291     }
 292 
 293     private ListIterator<T> buildListIterator(
 
 294             final ListIterator<Pair<Long, T>> it) {
 295         return new ListIterator<T>() {
 
 296 
 297             @Override
 298             public void add(T e) {
 299                 it.add(new Pair<Long, T>(System.nanoTime(), e));
 
 300             }
 301 
 302             @Override
 303             public boolean hasNext() {
 304                 return it.hasNext();
 
 305 
 306             }
 307 
 308             @Override
 309             public boolean hasPrevious() {
 
 310                 return it.hasPrevious();
 311 
 312             }
 313 
 314             @Override
 315             public T next() {
 
 316                 Pair<Long, T> e = it.next();
 317                 return e.getSecond();
 318             }
 319 
 320             @Override
 
 321             public int nextIndex() {
 322                 return it.nextIndex();
 323 
 324             }
 
 325 
 326             @Override
 327             public T previous() {
 328                 Pair<Long, T> e = it.previous();
 
 329                 return e.getSecond();
 330             }
 331 
 332             @Override
 333             public int previousIndex() {
 
 334                 return it.previousIndex();
 335 
 336             }
 337 
 338             @Override
 339             public void remove() {
 
 340                 it.remove();
 341 
 342             }
 343 
 344             @Override
 345             public void set(T e) {
 
 346                 it.set(new Pair<Long, T>(System.nanoTime(), e));
 347 
 348             }
 349         };
 350     }
 
 351 
 352 }
 353 
 354 
 

Listing of ServiceJMXBeanImpl.java

   1 package com.plexobject.jmx.impl;
 
   2 
   3 import java.beans.PropertyChangeListener;
   4 import java.beans.PropertyChangeSupport;
   5 import java.util.Map;
 
   6 import java.util.concurrent.ConcurrentHashMap;
   7 import java.util.concurrent.atomic.AtomicLong;
   8 
   9 import javax.management.AttributeChangeNotification;
 
  10 import javax.management.MBeanNotificationInfo;
  11 import javax.management.Notification;
  12 import javax.management.NotificationBroadcasterSupport;
 
  13 import javax.management.NotificationListener;
  14 
  15 import org.apache.commons.lang.builder.EqualsBuilder;
  16 import org.apache.commons.lang.builder.HashCodeBuilder;
 
  17 import org.apache.commons.lang.builder.ToStringBuilder;
  18 import org.apache.log4j.Logger;
  19 
  20 import com.plexobject.jmx.ServiceJMXBean;
 
  21 import com.plexobject.metrics.Metric;
  22 import com.plexobject.util.TimeUtils;
  23 
  24 public class ServiceJMXBeanImpl extends NotificationBroadcasterSupport
 
  25         implements ServiceJMXBean, NotificationListener {
  26     private static final Logger LOGGER = Logger
  27             .getLogger(ServiceJMXBeanImpl.class);
 
  28     private Map<String, String> properties = new ConcurrentHashMap<String, String>();
  29     private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
 
  30 
  31     private final String serviceName;
  32     private AtomicLong totalErrors;
 
  33     private AtomicLong totalRequests;
  34 
  35     private AtomicLong sequenceNumber;
  36     private String state;
 
  37 
  38     public ServiceJMXBeanImpl(final String serviceName) {
  39         this.serviceName = serviceName;
 
  40         this.totalErrors = new AtomicLong();
  41         this.totalRequests = new AtomicLong();
  42         this.sequenceNumber = new AtomicLong();
 
  43     }
  44 
  45     @Override
  46     public double getAverageElapsedTimeInNanoSecs() {
 
  47         return Metric.getMetric(getServiceName())
  48                 .getAverageDurationInNanoSecs();
  49     }
  50 
 
  51     public String getProperty(final String name) {
  52         return properties.get(name);
  53     }
 
  54 
  55     public void setProperty(final String name, final String value) {
 
  56         final String oldValue = properties.put(name, value);
  57         final Notification notification = new AttributeChangeNotification(this,
 
  58                 sequenceNumber.incrementAndGet(), TimeUtils
  59                         .getCurrentTimeMillis(), name + " changed", name,
  60                 "String", oldValue, value);
  61         sendNotification(notification);
 
  62         handleNotification(notification, null);
  63     }
  64 
  65     @Override
 
  66     public String getServiceName() {
  67         return serviceName;
  68     }
  69 
 
  70     @Override
  71     public long getTotalDurationInNanoSecs() {
  72         return Metric.getMetric(getServiceName()).getTotalDurationInNanoSecs();
 
  73     }
  74 
  75     @Override
  76     public long getTotalErrors() {
 
  77         return totalErrors.get();
  78     }
  79 
  80     public void incrementError() {
 
  81         final long oldErrors = totalErrors.getAndIncrement();
  82         final Notification notification = new AttributeChangeNotification(this,
 
  83                 sequenceNumber.incrementAndGet(), TimeUtils
  84                         .getCurrentTimeMillis(), "Errors changed", "Errors",
  85                 "long", oldErrors, oldErrors + 1);
 
  86         sendNotification(notification);
  87     }
  88 
  89     @Override
  90     public long getTotalRequests() {
 
  91         return totalRequests.get();
  92     }
  93 
  94     public void incrementRequests() {
 
  95         final long oldRequests = totalRequests.getAndIncrement();
  96         final Notification notification = new AttributeChangeNotification(this,
 
  97                 sequenceNumber.incrementAndGet(), TimeUtils
  98                         .getCurrentTimeMillis(), "Requests changed",
  99                 "Requests", "long", oldRequests, oldRequests + 1);
 
 100         sendNotification(notification);
 101     }
 102 
 103     @Override
 104     public MBeanNotificationInfo[] getNotificationInfo() {
 105         String[] types = new String[] { AttributeChangeNotification.ATTRIBUTE_CHANGE };
 
 106         String name = AttributeChangeNotification.class.getName();
 107         String description = "An attribute of this MBean has changed";
 108         MBeanNotificationInfo info = new MBeanNotificationInfo(types, name,
 
 109                 description);
 110 
 111         return new MBeanNotificationInfo[] { info };
 112     }
 113 
 
 114     @Override
 115     public String getState() {
 116         return state;
 117     }
 118 
 
 119     /**
 120      * @param state
 121      *            the state to set
 
 122      */
 123     public void setState(String state) {
 124         this.state = state;
 125     }
 
 126 
 127     /**
 128      * @see java.lang.Object#equals(Object)
 
 129      */
 130     @Override
 131     public boolean equals(Object object) {
 132         if (!(object instanceof ServiceJMXBeanImpl)) {
 
 133             return false;
 134         }
 135         ServiceJMXBeanImpl rhs = (ServiceJMXBeanImpl) object;
 136         return new EqualsBuilder().append(this.serviceName, rhs.serviceName)
 
 137                 .isEquals();
 138     }
 139 
 140     /**
 141      * @see java.lang.Object#hashCode()
 
 142      */
 143     @Override
 144     public int hashCode() {
 145         return new HashCodeBuilder(786529047, 1924536713).append(
 
 146                 this.serviceName).toHashCode();
 147     }
 148 
 149     /**
 150      * @see java.lang.Object#toString()
 
 151      */
 152     @Override
 153     public String toString() {
 154         return new ToStringBuilder(this)
 
 155                 .append("serviceName", this.serviceName).append("totalErrors",
 156                         this.totalErrors).append("totalRequests",
 157                         this.totalRequests).append("totalRequests",
 
 158                         this.totalRequests).append("state", this.state).append(
 159                         "properties", this.properties).toString();
 160     }
 
 161 
 162     public void addPropertyChangeListener(PropertyChangeListener pcl) {
 163         pcs.addPropertyChangeListener(pcl);
 164     }
 165 
 
 166     public void removePropertyChangeListener(PropertyChangeListener pcl) {
 167         pcs.removePropertyChangeListener(pcl);
 168 
 169     }
 170 
 
 171     @Override
 172     public void handleNotification(Notification notification, Object handback) {
 173         LOGGER.info("Received notification: ClassName: "
 174                 + notification.getClass().getName() + ", Source: "
 
 175                 + notification.getSource() + ", Type: "
 176                 + notification.getType() + ", tMessage: "
 177                 + notification.getMessage());
 178         if (notification instanceof AttributeChangeNotification) {
 
 179             AttributeChangeNotification acn = (AttributeChangeNotification) notification;
 180             pcs.firePropertyChange(acn.getAttributeName(), acn.getOldValue(),
 181                     acn.getNewValue());
 182 
 183         }
 184     }
 
 185 }
 186 
 187 
 

Testing

Finally, here is how you can test this filter:

  1 package com.plexobject;
 
  2 
  3 import java.net.InetAddress;
  4 import java.util.Date;
  5 
 
  6 import org.apache.log4j.Logger;
  7 import org.apache.log4j.PatternLayout;
  8 import org.apache.log4j.net.SMTPAppender;
 
  9 
 10 import com.plexobject.log.FilteredSMTPAppender;
 11 
 12 public class Main {
 
 13     private static final Logger LOGGER = Logger.getLogger(Main.class);
 14     public static void main(String[] args) {
 
 15         SMTPAppender appender = new FilteredSMTPAppender();
 16         try {
 17             appender.setTo("bhatti@xxx.com");
 18             appender.setFrom("bhatti@xxx.com");
 
 19             appender.setSMTPHost("smtp.xxx.net");
 20             appender.setLocationInfo(true);
 21             appender.setSubject("Error from " + InetAddress.getLocalHost());
 22 
 
 23             appender.setLayout(new PatternLayout());
 24             appender.activateOptions();
 25             LOGGER.addAppender(appender);
 26         } catch (Exception e) {
 
 27             LOGGER.error("Failed to register smtp appender", e);
 28         }
 29         while (true) {
 30             try {
 
 31                 throw new Exception("throwing exception at " + new Date());
 32             } catch (Exception e) {
 
 33                 LOGGER.error("Logging error at " + new Date(), e);
 34             }
 35             try {
 
 36                 Thread.sleep(1000);
 37             } catch (InterruptedException e) {
 38                 Thread.interrupted();
 39             }
 40         }
 
 41     }
 42 }
 43 
 44 
 

The above code simulates an error every second, but emails are sent based on the throttling level defined in the configuration. Obviously, you can define all of this configuration in a log4j configuration file instead, e.g.

<!-- Send email when error happens -->
<appender name="APP-EMAIL" class="com.plexobject.log.FilteredSMTPAppender">
  <param name="BufferSize" value="256" />
  <param name="SMTPHost" value="smtp.xxx.net" />
  <param name="From" value="bhatti@xxx.com" />
  <param name="To" value="bhatti@xxx.com" />
  <param name="Subject" value="Production Error" />
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern"
           value="[%d{ISO8601}]%n%n%-5p%n%n%c%n%n%m%n%n" />
  </layout>

  <filter class="org.apache.log4j.varia.StringMatchFilter">
    <param name="StringToMatch" value="My Error"/>
    <param name="AcceptOnMatch" value="false" />
  </filter>
</appender>
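
If you prefer the properties format, the appender portion of the same configuration would look roughly as below (note that filters such as StringMatchFilter can only be configured through the XML format in log4j 1.x):

 log4j.appender.APP-EMAIL=com.plexobject.log.FilteredSMTPAppender
 log4j.appender.APP-EMAIL.BufferSize=256
 log4j.appender.APP-EMAIL.SMTPHost=smtp.xxx.net
 log4j.appender.APP-EMAIL.From=bhatti@xxx.com
 log4j.appender.APP-EMAIL.To=bhatti@xxx.com
 log4j.appender.APP-EMAIL.Subject=Production Error
 log4j.appender.APP-EMAIL.layout=org.apache.log4j.PatternLayout
 log4j.appender.APP-EMAIL.layout.ConversionPattern=[%d{ISO8601}]%n%n%-5p%n%n%c%n%n%m%n%n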

Summary

I am skipping the other classes, but you can download the entire code from FilteredSMTPAppender.zip. This solution seems to be working for me, but feel free to share your experience with similar problems.

February 3, 2010

A few recipes for reprocessing messages in Dead-Letter-Queue using ActiveMQ

Filed under: Computing — admin @ 2:42 pm

Messaging-based asynchronous processing is a key component of any complex software system, especially in a transactional environment. There are a number of solutions that provide high-performance and reliable messaging in the Java space, such as ActiveMQ, FUSE Message Broker, JBossMQ, SonicMQ, WebLogic, WebSphere, Fiorano, etc. These providers support the JMS specification, which provides abstractions for queues, message producers and message consumers. In this blog, I will go over some recipes for recovering messages from the dead letter queue when using ActiveMQ.

What is a Dead Letter Queue

Generally, when a consumer fails to process a message within a transaction or does not send an acknowledgement back to the broker, the message is put back on the queue. The message is then redelivered up to a certain number of times based on configuration, and finally the message is moved to the dead letter queue when that limit is exceeded. The ActiveMQ documentation recommends the following settings for defining dead letter queues:

 <broker...>
   <destinationPolicy>
     <policyMap>
       <policyEntries>
         <!-- Set the following policy on all queues using the '>' wildcard -->
         <policyEntry queue=">">
           <deadLetterStrategy>
             <individualDeadLetterStrategy
               queuePrefix="DLQ." useQueueForQueueMessages="true" />
           </deadLetterStrategy>
         </policyEntry>
       </policyEntries>
     </policyMap>
   </destinationPolicy>
   ...
 </broker>
 
 

and you can control the redelivery policy as follows:

 RedeliveryPolicy policy = connection.getRedeliveryPolicy();
 policy.setInitialRedeliveryDelay(500);
 policy.setBackOffMultiplier(2);
 policy.setUseExponentialBackOff(true);
 policy.setMaximumRedeliveries(2);
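
The same policy object is also reachable from the ActiveMQ connection factory, which is convenient if you want every connection it creates to share these settings (a sketch, assuming the ActiveMQ-specific factory class and a local broker URL):

 // Sketch: configure redelivery once on the factory instead of per connection.
 ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
 RedeliveryPolicy policy = factory.getRedeliveryPolicy();
 policy.setInitialRedeliveryDelay(500);
 policy.setBackOffMultiplier(2);
 policy.setUseExponentialBackOff(true);
 policy.setMaximumRedeliveries(2);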
 

It is important that you create a DLQ per queue as shown above; otherwise ActiveMQ puts all dead messages into a single shared dead letter queue. With the individualDeadLetterStrategy above, for example, dead messages from a queue named orders end up in DLQ.orders.

Getting the QueueViewMBean Handle

ActiveMQ provides the QueueViewMBean to invoke administration APIs on queues. The easiest way to get this handle is to use the BrokerFacadeSupport class, which is extended by RemoteJMXBrokerFacade and LocalBrokerFacade. You can use RemoteJMXBrokerFacade if you are connecting to a remote ActiveMQ server; here is the Spring configuration for setting it up:

     <bean id="brokerQuery" class="org.apache.activemq.web.RemoteJMXBrokerFacade" autowire="constructor" destroy-method="shutdown">
             <property name="configuration">
             <bean class="org.apache.activemq.web.config.SystemPropertiesConfiguration"/>
         </property>
             <property name="brokerName"><null/></property>
     </bean>
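
With SystemPropertiesConfiguration, the broker's JMX connection details are read from JVM system properties; the property names below are the ones used by the ActiveMQ web console, and I am assuming RemoteJMXBrokerFacade honors the same ones (host and credentials are placeholders):

 -Dwebconsole.jmx.url=service:jmx:rmi:///jndi/rmi://remote-host:1099/jmxrmi
 -Dwebconsole.jmx.user=admin
 -Dwebconsole.jmx.password=activemq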
 

Alternatively, you can use LocalBrokerFacade if you are running an embedded ActiveMQ server; below is the Spring configuration for it:

     <bean id="brokerQuery" class="org.apache.activemq.web.LocalBrokerFacade" autowire="constructor" scope="prototype"/>
 

Getting the number of messages in the queue

Once you have a handle to the QueueViewMBean, you can use the following API to find the number of messages in the queue:

    public long getQueueSize(final String dest) {
        try {
            return brokerQuery.getQueue(dest).getQueueSize();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
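
The QueueViewMBean exposes more than just counters; for example, here is a sketch of moving a single message out of a DLQ by its JMS message ID using the moveMessageTo admin operation (same brokerQuery handle as above):

    // Sketch: ask the broker to move one message from the DLQ to another queue.
    public boolean moveMessage(final String dlq, final String messageId, final String toQueue) {
        try {
            QueueViewMBean queue = brokerQuery.getQueue(dlq);
            return queue.moveMessageTo(messageId, toQueue);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }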
 

Copying Messages using JMS APIs

The JMS specification provides APIs to browse a queue in read-only mode, and you can then send the browsed messages to another queue, e.g.

 import java.util.Enumeration;

 import javax.jms.Connection;
 import javax.jms.ConnectionFactory;
 import javax.jms.JMSException;
 import javax.jms.Message;
 import javax.jms.Queue;
 import javax.jms.QueueBrowser;
 import javax.jms.Session;
 import javax.jms.TextMessage;
 import javax.management.openmbean.CompositeData;

 import org.apache.activemq.broker.jmx.QueueViewMBean;
 import org.apache.activemq.web.BrokerFacadeSupport;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.jms.core.BrowserCallback;
 import org.springframework.jms.core.JmsTemplate;
 import org.springframework.jms.core.MessageCreator;

 public class DlqReprocessor {
     @Autowired
     private JmsTemplate jmsTemplate;

     @Autowired
     BrokerFacadeSupport brokerQuery;

     @Autowired
     ConnectionFactory connectionFactory;

     @SuppressWarnings("unchecked")
     void redeliverDLQUsingJms(final String brokerName, final String from,
             final String to) {
         Connection connection = null;
         Session session = null;
         try {
             connection = connectionFactory.createConnection();
             connection.start();
             session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
             Queue dlq = session.createQueue(from);
             QueueBrowser browser = session.createBrowser(dlq);
             Enumeration<Message> e = browser.getEnumeration();
             while (e.hasMoreElements()) {
                 Message message = e.nextElement();
                 final String messageBody = ((TextMessage) message).getText();
                 jmsTemplate.send(to, new MessageCreator() {
                     @Override
                     public Message createMessage(final Session session)
                             throws JMSException {
                         return session.createTextMessage(messageBody);
                     }
                 });
             }
         } catch (Exception e) {
             throw new RuntimeException(e);
         } finally {
             try {
                 session.close();
             } catch (Exception e) {
             }
             try {
                 connection.close();
             } catch (Exception e) {
             }
         }
     }
     // . . .
 }

The downside of the above approach is that it leaves the original messages in the dead letter queue.

Copying Messages using Spring’s JmsTemplate APIs

You can effectively do the same thing with the JmsTemplate provided by Spring with a bit less code, e.g.

     void redeliverDLQUsingJmsTemplateBrowse(final String from, final String to) {
         try {
             jmsTemplate.browse(from, new BrowserCallback() {
                 @SuppressWarnings("unchecked")
                 @Override
                 public Object doInJms(Session session, QueueBrowser browser)
                         throws JMSException {
                     Enumeration<Message> e = browser.getEnumeration();
                     while (e.hasMoreElements()) {
                         Message message = e.nextElement();
                         final String messageBody = ((TextMessage) message)
                                 .getText();
                         jmsTemplate.send(to, new MessageCreator() {
                             @Override
                             public Message createMessage(final Session session)
                                     throws JMSException {
                                 return session.createTextMessage(messageBody);
                             }
                         });
                     }
                     return null;
                 }
             });
         } catch (Exception e) {
             throw new RuntimeException(e);
         }
     }

Moving Messages using receive/send APIs

As I mentioned, the above approaches leave messages in the DLQ, which may not be what you want. Thus, another simple approach is to consume messages from the dead letter queue and send them to another queue, e.g.

     public void redeliverDLQUsingJmsTemplateReceive(final String from,
             final String to) {
         try {
             jmsTemplate.setReceiveTimeout(100);
             Message message = null;
             while ((message = jmsTemplate.receive(from)) != null) {
                 final String messageBody = ((TextMessage) message).getText();
                 jmsTemplate.send(to, new MessageCreator() {
                     @Override
                     public Message createMessage(final Session session)
                             throws JMSException {
                         return session.createTextMessage(messageBody);
                     }
                 });
             }
         } catch (Exception e) {
             throw new RuntimeException(e);
         }
     }

Moving Messages using ActiveMQ’s API

Finally, the best approach I found was to use ActiveMQ’s APIs to move messages, e.g.

     public void redeliverDLQUsingJMX(final String brokerName, final String from,
             final String to) {
         try {
             final QueueViewMBean queue = brokerQuery.getQueue(from);
             for (int i = 0; i < 10 && queue.getQueueSize() > 0; i++) {
                 CompositeData[] compdatalist = queue.browse();
                 for (CompositeData cdata : compdatalist) {
                     String messageID = (String) cdata.get("JMSMessageID");
                     queue.moveMessageTo(messageID, to);
                 }
             }
         } catch (Exception e) {
             throw new RuntimeException(e);
         }
     }

I have been using this approach and have found it to be reliable for reprocessing the dead letter queue, though these techniques can also be used for general queues. I am sure there are tons of alternatives, including using a full-fledged enterprise service bus route. Let me know if you have interesting solutions to this problem.

January 20, 2010

PlexRBAC: an open source project for providing powerful role based security (II)

Filed under: Computing — admin @ 1:50 pm

This is a continuation of my previous blog on my open source project PlexRBAC for managing role based access control. Last time, I covered the REST APIs, and in this blog I will cover the internal domain model, the RBAC APIs in Java, and examples of instance or dynamic based security.

Layers

PlexRBAC consists of the following layers:

Business Domain Layer

This layer defines core classes that are part of the RBAC based security domain such as:

  • Domain – As described previously, the domain allows you to support multiple applications or realms.
  • Subject – The subject represents users who are defined in an application.
  • Role – A role represents job title or function.
  • Permission – A permission is composed of operation, target and an expression that is used for dynamic or instance based security.
  • SecurityError – Upon a permission failure, you can choose to store them in the database using SecurityError.

Repository Layer

This layer is responsible for accessing and storing the above objects in the database. PlexRBAC uses Berkeley DB for persistence, and each domain is stored as a separate database, which allows you to segregate permissions and roles for distinct domains. Following is the list of repositories supported by PlexRBAC:

  • DomainRepository – provides database access for Domains.
  • PermissionRepository – provides database access for Permissions.
  • SubjectRepository – provides database access for Subjects.
  • SecurityErrorRepository – provides database access for SecurityErrors.
  • RoleRepository – provides database access for Roles.
  • SecurityMappingRepository – provides APIs to map permissions with roles and to map subject with roles.
  • RepositoryFactory – provides factory methods to create above repositories.

Security Layer

This layer defines the PermissionManager class for authorizing permissions.

Evaluation Layer

This layer provides the evaluation engine for instance based security.

Service Layer

This layer defines REST services such as:

  • DomainService – this service provides REST APIs for accessing Domains.
  • PermissionService – this service provides REST APIs for accessing Permissions.
  • SubjectService – this service provides REST APIs for accessing Subjects.
  • RoleService – this service provides REST APIs for accessing Roles.
  • AuthenticationService – this service provides REST APIs for authenticating users.
  • AuthorizationService – this service provides REST APIs for authorizing permissions.
  • RolePermissionService – this service provides REST APIs for mapping permissions with roles.
  • SubjectRolesService – this service provides REST APIs for mapping subjects with roles.

JMX Layer

This layer defines JMX helper classes for managing services and configuration remotely.

Caching Layer

This layer provides caching of security permissions to improve performance.

Metrics Layer

This layer provides performance measurement classes such as Timing class to measure method invocation benchmarks.

Utility Layer

This layer provides helper classes.

Web Layer

This layer provides filters for enforcing authentication and authorization when accessing REST APIs.

Example

Let’s use the same example that we described last time but with the addition of instance based security. Let’s assume there are five roles: Teller, Customer-Service-Representative (CSR), Accountant, AccountingManager and LoanOfficer, where

  • A teller can modify customer deposit accounts — but only if the customer and the teller live in the same region
  • A customer service representative can create or delete customer deposit accounts — but only if the customer and the CSR live in the same region
  • An accountant can create general ledger reports — but only if the year is the current year
  • An accounting manager can modify ledger-posting rules — but only if the year is the current year
  • A loan officer can create and modify loan accounts – but only if the account balance is < 10000

In addition, the following classes will be used to add domain specific security:

 class User {

     private String id;
     private String region;

     User() {
     }

     public User(String id, String region) {
         this.id = id;
         this.region = region;
     }

     public void setRegion(String region) {
         this.region = region;
     }

     public String getRegion() {
         return region;
     }

     public void setId(String id) {
         this.id = id;
     }

     public String getId() {
         return id;
     }
 }

 class Customer extends User {

     public Customer(String id, String region) {
         super(id, region);
     }
 }

 class Employee extends User {

     public Employee(String id, String region) {
         super(id, region);
     }
 }

 class Account {

     private String id;
     private double balance;

     Account() {
     }

     public Account(String id, double balance) {
         this.id = id;
         this.balance = balance;
     }

     /**
      * @return the id
      */
     public String getId() {
         return id;
     }

     /**
      * @param id the id to set
      */
     public void setId(String id) {
         this.id = id;
     }

     public void setBalance(double balance) {
         this.balance = balance;
     }

     public double getBalance() {
         return balance;
     }
 }

Bootstrapping

Let’s create a handle to the repository-factory as:

     private static final String TEST_DB_DIR = "test_db_dir_perms";
     RepositoryFactory repositoryFactory = new RepositoryFactoryImpl(TEST_DB_DIR);

And an instance of the permission manager as:

 PermissionManager permissionManager = new PermissionManagerImpl(repositoryFactory,
         new JavascriptEvaluator());

Creating a domain

Now, let’s create a domain for banking:

     private static final String BANKING = "banking";
     repositoryFactory.getDomainRepository().save(new Domain(BANKING, ""));

Creating Users

Next step is to create users for the domain or application so let’s define accounts for tom, cassy, ali, mike and larry, i.e.,

         final SubjectRepository subjectRepo = repositoryFactory
                 .getSubjectRepository(BANKING);
         Subject tom = subjectRepo.save(new Subject("tom", "pass"));
         Subject cassy = subjectRepo.save(new Subject("cassy", "pass"));
         Subject ali = subjectRepo.save(new Subject("ali", "pass"));
         Subject mike = subjectRepo.save(new Subject("mike", "pass"));
         Subject larry = subjectRepo.save(new Subject("larry", "pass"));

Creating Roles

Now, we will create roles for Employee, Teller, CSR, Accountant, AccountingManager and LoanOfficer:

         final RoleRepository roleRepo = repositoryFactory
                 .getRoleRepository(BANKING);
         Role employee = roleRepo.save(new Role("Employee"));
         Role teller = roleRepo.save(new Role("Teller", employee));
         Role csr = roleRepo.save(new Role("CSR", teller));
         Role accountant = roleRepo.save(new Role("Accountant", employee));
         Role accountantMgr = roleRepo.save(new Role("AccountingManager",
                 accountant));
         Role loanOfficer = roleRepo
                 .save(new Role("LoanOfficer", accountantMgr));

Creating Permissions

We can then create new permissions and save them in the database as follows:

         final PermissionRepository permRepo = repositoryFactory
                 .getPermissionRepository(BANKING);
         Permission cdDeposit = permRepo.save(new Permission("(create|delete)",
                 "DepositAccount",
                 "employee.getRegion().equals(customer.getRegion())")); // 1
         Permission ruDeposit = permRepo.save(new Permission("(read|modify)",
                 "DepositAccount",
                 "employee.getRegion().equals(customer.getRegion())")); // 2
         Permission cdLoan = permRepo.save(new Permission("(create|delete)",
                 "LoanAccount", "account.getBalance() < 10000")); // 3
         Permission ruLoan = permRepo.save(new Permission("(read|modify)",
                 "LoanAccount", "account.getBalance() < 10000")); // 4

         Permission rdLedger = permRepo.save(new Permission("(read|create)",
                 "GeneralLedger", "year == new Date().getFullYear()")); // 5

         Permission rGlpr = permRepo
                 .save(new Permission("read", "GeneralLedgerPostingRules",
                         "year == new Date().getFullYear()")); // 6

         Permission cmdGlpr = permRepo.save(new Permission(
                 "(create|modify|delete)", "GeneralLedgerPostingRules",
                 "year == new Date().getFullYear()")); // 7

Mapping Subjects/Permissions to Roles

Now we will map subjects to roles as follows:

         final SecurityMappingRepository smr = repositoryFactory
                 .getSecurityMappingRepository(BANKING);

         // Mapping Users to Roles
         smr.addRolesToSubject(tom, teller);
         smr.addRolesToSubject(cassy, csr);
         smr.addRolesToSubject(ali, accountant);
         smr.addRolesToSubject(mike, accountantMgr);
         smr.addRolesToSubject(larry, loanOfficer);

Then we will map permissions to roles as follows:

         smr.addPermissionsToRole(teller, ruDeposit);
         smr.addPermissionsToRole(csr, cdDeposit);
         smr.addPermissionsToRole(accountant, rdLedger);
         smr.addPermissionsToRole(accountant, ruLoan);
         smr.addPermissionsToRole(accountantMgr, cdLoan);
         smr.addPermissionsToRole(accountantMgr, rGlpr);
         smr.addPermissionsToRole(loanOfficer, cmdGlpr);

Authorization

Now for the fun part: authorization. Let’s check if user “tom” can view deposit-accounts, e.g.

     public static Map<String, Object> toMap(final Object... keyValues) {
         Map<String, Object> map = new HashMap<String, Object>();
         for (int i = 0; i < keyValues.length - 1; i += 2) {
             map.put(keyValues[i].toString(), keyValues[i + 1]);
         }
         return map;
     }

     @Test
     public void testReadDepositByTeller() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "tom", "read",
                 "DepositAccount", toMap("employee", new Employee("tom",
                         "west"), "customer", new Customer("zak", "west"))));
     }

Note that the above test method builds a PermissionRequest that encapsulates domain, subject, operation, target and context, and then calls the check method of the PermissionManager, which throws SecurityException if the permission check fails.

Then we check if tom, the teller can delete deposit-account, e.g.

     @Test(expected = SecurityException.class)
     public void testDeleteByTeller() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "tom", "delete",
                 "DepositAccount", toMap("employee", new Employee("tom",
                         "west"), "customer", new Customer("zak", "west"))));
     }

This would throw a SecurityException as expected.

Now let’s check if cassy, the CSR can delete deposit-account, e.g.

     @Test
     public void testDeleteByCsr() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "cassy",
                 "delete", "DepositAccount", toMap("employee",
                         new Employee("cassy", "west"), "customer",
                         new Customer("zak", "west"))));
     }

This works as the CSR role has permission for deleting deposit-accounts. Now, let’s check if ali, the accountant, can view the general-ledger, e.g.

     @Test
     public void testReadLedgerByAccountant() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "ali", "read",
                 "GeneralLedger", toMap("year", 2010, "account",
                         new Account("zak", 500))));
     }

Which works as expected. Next we check if ali can delete general-ledger:

     @Test(expected = SecurityException.class)
     public void testDeleteLedgerByAccountant() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "ali", "delete",
                 "GeneralLedger", toMap("year", 2010, "account",
                         new Account("zak", 500))));
     }

Which would fail as only the accounting-manager can delete. Next we check if mike, the accounting-manager, can create a general-ledger, e.g.

     @Test
     public void testCreateLedgerByAccountantManager() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "mike",
                 "create", "GeneralLedger", toMap("year", 2010,
                         "account", new Account("zak", 500))));
     }

Which works as expected. Now we check if mike can create posting-rules of general-ledger, e.g.

     @Test(expected = SecurityException.class)
     public void testPostLedgingRulesByAccountantManager() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "mike",
                 "create", "GeneralLedgerPostingRules", toMap("year",
                         2010, "account", new Account("zak", 500))));
     }

Which fails authorization. Then we check if larry, the loan officer can create posting-rules of general-ledger, e.g.

     @Test
     public void testPostLedgingRulesByLoanManager() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "larry",
                 "create", "GeneralLedgerPostingRules", toMap("year",
                         2010, "account", new Account("zak", 500))));
     }

Which works as expected. Now, let’s check the same permission but with different year, e.g.

     @Test(expected = SecurityException.class)
     public void testPostLedgingRulesByLoanManagerWithExceededAmount() {
         initDatabase();
         permissionManager.check(new PermissionRequest(BANKING, "larry",
                 "create", "GeneralLedgerPostingRules", IDUtils.toMap("year",
                         2011)));
     }

Which fails as the year doesn’t match.

Summary

The above examples demonstrate how the PlexRBAC API can be used along with instance or dynamic based security. In the next post, I will describe caching and how PlexRBAC can be integrated with J2EE and Spring security.

January 10, 2010

PlexRBAC: an open source project for providing powerful role based security (I)

Filed under: Computing — admin @ 7:45 pm

Overview

In my last blog I described the core pieces of a security system and mentioned a new open source project PlexRBAC that I recently started to provide Role Based Security both as a REST service and as a Java library. In this post, I will go over some of the features that are now available. This project is based on my experience with a number of home built solutions for RBAC and standard J2EE solutions. However, a key differentiator is that it adds instance based or context based security for dynamic access control. The role based security consists of the following components:

Domain

Though a domain is strictly not part of role based security, PlexRBAC provides segregation of security policies by domains, where a domain can represent a security realm or an application.

Subject

The subject represents users who are defined in an application.

Role

A role represents a job title or function. A subject or user belongs to one or more roles. One of the key features of PlexRBAC is that roles support inheritance, where a role can have one or more parent roles. This helps define security policies that follow the “don’t repeat yourself” (DRY) principle.

Permission

A permission consists of two sub-parts: operation and target, where the operation is a “verb” that describes the action and the target represents the “object” that is acted upon. All permissions are assigned to roles. In PlexRBAC, permissions also contain an expression that is evaluated to check dynamic security. PlexRBAC allows Javascript based expressions and provides access to runtime request parameters. Finally, PlexRBAC supports regular expressions for both operations and targets, so you can define operations like “(read|write|create|delete)” or “read*”, etc.

Following diagram shows the relationship between these components:

Getting Started

PlexRBAC depends on Java 1.6+ and Maven 2.0+. You can download the project using git:

 git clone git@github.com:bhatti/PlexRBAC.git
 

Then you can start the REST based web service within Jetty by typing:

 mvn jetty:run-war
 

The service will listen on port 8080 and you can test it with curl.

Authentication

Though PlexRBAC is not designed for authentication, it provides Basic authentication and all administration APIs are protected by it. By default, it uses an account “super_admin” with password “changeme”, which you can modify via configuration. Also, as PlexRBAC supports domains to segregate security policies, subjects are also restricted to the domains where they are defined.

REST APIs

Following are APIs defined in PlexRBAC:

Domains

  • GET /api/security/domains – returns list of all domains in JSON format.
  • GET /api/security/domains/{domain-id} – returns details of given domain in JSON format.
  • PUT /api/security/domains/{domain-id} with body of domain details in JSON format.
  • DELETE /api/security/domains – deletes all domains.
  • DELETE /api/security/domains/{domain-id} – deletes domain identified by domain-id.

Subjects

  • GET /api/security/subjects/{domain-id} – returns list of all subjects in domain identified by domain-id in JSON format.
  • GET /api/security/subjects/{domain-id}/{id} – returns details of given subject identified by id in given domain.
  • PUT /api/security/subjects/{domain-id}/{id} with body of subject details in JSON format.
  • DELETE /api/security/subjects/{domain-id} – deletes all subjects in given domain.
  • DELETE /api/security/subjects/{domain-id}/{id} – deletes subject identified by id.

Roles

  • GET /api/security/roles/{domain-id} – returns list of all roles in domain identified by domain-id in JSON format.
  • GET /api/security/roles/{domain-id}/{id} – returns details of given role identified by id in given domain.
  • PUT /api/security/roles/{domain-id}/{id} with body of role details in JSON format.
  • DELETE /api/security/roles/{domain-id} – deletes all roles in given domain.
  • DELETE /api/security/roles/{domain-id}/{id} – deletes role identified by id.

Permissions

  • GET /api/security/permissions/{domain-id} – returns list of all permissions in domain identified by domain-id in JSON format.
  • GET /api/security/permissions/{domain-id}/{id} – returns details of given permission identified by id in given domain.
  • POST /api/security/permissions/{domain-id} with body of permission details in JSON format. Note that this API uses POST instead of PUT as the id will be assigned by the server.
  • DELETE /api/security/permissions/{domain-id} – deletes all permissions in given domain.
  • DELETE /api/security/permissions/{domain-id}/{id} – deletes permission identified by id.

Mapping of Roles and Permissions

  • PUT /api/security/role_perms/{domain-id}/{role-id} – adds permissions identified by permissionIds that stores list of permission-ids in JSON format. Note that permissionIds is passed as a form parameter.
  • DELETE /api/security/role_perms/{domain-id}/{role-id} – removes permissions identified by permissionIds that stores list of permission-ids in JSON format. Note that permissionIds is passed as a form parameter.

Mapping of Subjects and Roles

  • PUT /api/security/subject_roles/{domain-id}/{subject-id} – adds roles identified by rolenames that stores list of role-ids in JSON format. Note that rolenames is passed as a form parameter.
  • DELETE /api/security/subject_roles/{domain-id}/{subject-id} – removes roles identified by rolenames that stores list of role-ids in JSON format. Note that rolenames is passed as a form parameter.

Authorization

  • GET /api/security/authorize/{domain-id} – with query parameter of operation and target.

Example

Let’s start with a banking example where a bank-object can be an account, a general-ledger-report or ledger-posting-rules, and an account is further grouped into a customer account or a loan account, e.g.

Let’s assume there are five roles: Teller, Customer-Service-Representative (CSR), Accountant, AccountingManager and LoanOfficer, where

  • A teller can modify customer deposit accounts.
  • A customer service representative can create or delete customer deposit accounts.
  • An accountant can create general ledger reports.
  • An accounting manager can modify ledger-posting rules.
  • A loan officer can create and modify loan accounts.

Creating a domain

The first thing is to create a security domain for your application. As we are dealing with banking domain, let’s call our domain “banking”.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/domains/banking" -d '{"id":"banking"}'
 

It will return response:

 {"id":"banking","ownerSubjectNames":"super_admin"}
 

The first thing to note is that we are passing the user and password using Basic authentication, as all accesses to the administration APIs require login. Now, you can find out the available domains via

 curl -v --user "super_admin:changeme" "http://localhost:8080/api/security/domains"
 

which would return something like:

 [{"id":"banking","ownerSubjectNames":"super_admin"},{"description":"default","id":"default","ownerSubjectNames":"super_admin"}]
 

Creating Users

Next step is to create users for the domain or application so let’s define accounts for tom, cassy, ali, mike and larry, i.e.,

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subjects/banking" -d '{"id":"tom","credentials":"pass"}'
 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subjects/banking" -d '{"id":"cassy","credentials":"pass"}'
 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subjects/banking" -d '{"id":"ali","credentials":"pass"}'
 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subjects/banking" -d '{"id":"mike","credentials":"pass"}'
 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subjects/banking" -d '{"id":"larry","credentials":"pass"}'
 

Note that each user is identified by an id or username and credentials, and in the above examples usernames or subject-ids are prefixed with domain-ids, e.g. “default:super_admin”.

Creating Roles

As I mentioned, a role represents a job title or responsibilities and each role can have one or more parents. By default, PlexRBAC defines an “anonymous” role, which is used for users who are not logged in, and all user-defined roles extend the “anonymous” role.

First, we create a role for bank employee called “Employee”:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/roles/banking" -d '{"id":"Employee"}'
 

which returns

 {"id":"Employee","parentIds":["anonymous"]}
 

As you can see, the “Employee” role is created with a parent of “anonymous”. Next, we create the “Teller” role:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/roles/banking" -d '{"id":"Teller","parentIds":["Employee"]}'
 

which returns:

 {"id":"Teller","parentIds":["Employee"]}
 

Then we create a role for customer-service-representative called “CSR” that extends Teller, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/roles/banking" -d '{"id":"CSR","parentIds":["Teller"]}' 
 

which returns:

 {"id":"CSR","parentIds":["Teller"]}
 

Then we create a role for “Accountant”:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/roles/banking" -d '{"id":"Accountant","parentIds":["Employee"]}' 
 

which returns:

 {"id":"Accountant","parentIds":["Employee"]}
 

Then we create a role for “AccountingManager”, which extends “Accountant”, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/roles/banking" -d '{"id":"AccountingManager","parentIds":["Accountant"]}' 
 

which returns:

 {"id":"AccountingManager","parentIds":["Accountant"]}
 

Finally, we create a role for “LoanOfficer”, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/roles/banking" -d '{"id":"LoanOfficer","parentIds":["Employee"]}' 
 

which returns:

 {"id":"LoanOfficer","parentIds":["Employee"]}
 

Creating Permissions

As described above, a permission is composed of an operation, a target and an expression, where the operation and target can be any regular expression and the expression can be any Javascript expression. However, the following permissions don’t define any expressions for simplicity. First, we create a permission to create or delete a deposit-account, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X POST "http://localhost:8080/api/security/permissions/banking" -d '{"operation":"(create|delete)","target":"DepositAccount","expression":""}' 
 

which returns:

 {"expression":"","id":"1","operation":"(create|delete)","target":"DepositAccount"}
 

Each permission is automatically assigned a unique numeric id. Next, we create a permission to read or modify deposit-account, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X POST "http://localhost:8080/api/security/permissions/banking" -d '{"operation":"(read|modify)","target":"DepositAccount","expression":""}' 
 

which returns:

 {"expression":"","id":"2","operation":"(read|modify)","target":"DepositAccount"}
 

Then, we create a permission to create or delete loan-account

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X POST "http://localhost:8080/api/security/permissions/banking" -d '{"operation":"(create|delete)","target":"LoanAccount","expression":""}' 
 

which returns:

 {"expression":"","id":"3","operation":"(create|delete)","target":"LoanAccount"}
 

Then we create a permission to read or modify loan-account, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X POST "http://localhost:8080/api/security/permissions/banking" -d '{"operation":"(read|modify)","target":"LoanAccount","expression":""}' 
 

which returns:

 {"expression":"","id":"4","operation":"(read|modify)","target":"LoanAccount"}
 

Then we create a permission to read or create the general-ledger, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X POST "http://localhost:8080/api/security/permissions/banking" -d '{"operation":"(read|create)","target":"GeneralLedger","expression":""}' 
 

which returns:

 {"expression":"","id":"5","operation":"(read|create)","target":"GeneralLedger"}
 

Finally, we create a permission for modifying posting rules of general-ledger, e.g.

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X POST "http://localhost:8080/api/security/permissions/banking" -d '{"operation":"(read|create|modify|delete)","target":"GeneralLedgerPostingRules","expression":""}' 
 

which returns:

 {"expression":"","id":"6","operation":"(read|create|modify|delete)","target":"GeneralLedgerPostingRules"}
 

Mapping Permissions to Roles

Next task is to map permissions to roles. First we assign permission to view or modify customer deposit accounts to Teller role:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/role_perms/banking/Teller" -d 'permissionIds=["2"]'
 

which returns all permission-ids for given role, e.g.

 ["2"]
 

Then we assign the permission to create or delete customer deposit accounts to CSR (as CSR extends Teller, it automatically gets Teller’s view and modify permissions as well):

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/role_perms/banking/CSR" -d 'permissionIds=["1"]'
 

Then we assign permissions to create general ledger to Accountant:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/role_perms/banking/Accountant" -d 'permissionIds=["5"]'
 

Then we assign permission to modify ledger-posting rules to AccountingManager:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/role_perms/banking/AccountingManager" -d 'permissionIds=["6"]' 
 

Mapping Users to Roles

A role is associated with one or more permissions and each user is assigned one or more roles. First, we assign subject “tom” to the Teller role:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subject_roles/banking/tom" -d 'rolenames=["Teller"]'
 

which returns list of all roles for given subject or user, e.g.

 ["Teller"]
 

Then we assign subject “cassy” to CSR role:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subject_roles/banking/cassy" -d 'rolenames=["CSR"]'
 

Next we assign subject “ali” to role of Accountant:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subject_roles/banking/ali" -d 'rolenames=["Accountant"]'
 

Then we assign role AccountingManager to “mike”:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subject_roles/banking/mike" -d 'rolenames=["AccountingManager"]'
 

Finally we assign subject “larry” to LoanOfficer role:

 curl -H "Content-Type: application/json" --user "default:super_admin:changeme" -X PUT "http://localhost:8080/api/security/subject_roles/banking/larry" -d 'rolenames=["LoanOfficer"]'
 

Authorization

Now we are ready to validate authorization based on above security policies. For example, let’s check if user “tom” can view deposit-accounts, e.g.

 curl -v --user "banking:tom:pass" "http://localhost:8080/api/authorize/banking?operation=read&target=DepositAccount"
 

On successful authorization, the API returns a 200 http response-code and on failure it returns a 401 http response-code, e.g.

 < HTTP/1.1 200 OK
 

Then we check if tom, the teller can delete deposit-account, e.g.

 curl -v --user "banking:tom:pass" "http://localhost:8080/api/authorize/banking?operation=delete&target=DepositAccount"
 

which returns http-response-code 401, e.g.

 < HTTP/1.1 401 Unauthorized
 

Then we check if cassy, the CSR, can delete a deposit-account, e.g.

 curl -v --user "banking:cassy:pass" "http://localhost:8080/api/authorize/banking?operation=delete&target=DepositAccount"
 

which returns:

 < HTTP/1.1 200 OK
 

Then we check if ali, the accountant can view general-ledger, e.g.

 curl -v --user "banking:ali:pass" "http://localhost:8080/api/authorize/banking?operation=read&target=GeneralLedger"
 

which returns:

 < HTTP/1.1 200 OK
 

Next we check if mike, the accounting-manager can create general-ledger, e.g.

 curl -v --user "banking:mike:pass" "http://localhost:8080/api/authorize/banking?operation=create&target=GeneralLedger"
 

which returns:

 < HTTP/1.1 200 OK
 

Then we check if mike, the accounting-manager, can create posting-rules of the general-ledger, e.g.

 curl -v --user "banking:mike:pass" "http://localhost:8080/api/authorize/banking?operation=create&target=GeneralLedgerPostingRules"
 

which returns:

 < HTTP/1.1 200 OK
 

Next, ali tries to create posting rules via

 curl -v --user "banking:ali:pass" "http://localhost:8080/api/authorize/banking?operation=create&target=GeneralLedgerPostingRules"
 

which is denied:

 < HTTP/1.1 401 Unauthorized
 

Summary

The above examples demonstrate how PlexRBAC can be used to define and enforce flexible security policies. In the next post, I will describe instance based security, regular expressions and the Java APIs for PlexRBAC.

December 27, 2009

Building Security Systems

Filed under: Computing — admin @ 11:20 pm

Having been a software developer for over eighteen years, I have observed a number of recurring problems, and one of them is building a security system. Most systems you build will require some kind of security, so in this post I will go over the core concepts involved in adding security to your system.

User Registration

A pre-requisite for any security system is to allow users to register with the system and to store those users in some database, LDAP, Active Directory, or other storage system. For an internal application this step may be unnecessary.

Authentication

Authentication allows systems to validate users based on a password or some other form of verification. For internal applications within a company, users may have to use multiple applications, each with its own authentication, and each external website also requires a separate authentication. This quickly becomes burdensome for both users and applications, as users have to remember the passwords and systems have to maintain them. Thus, many companies employ some form of Single-Sign-On, and I have used many solutions such as SiteMinder, IChain, Kerberos, Open SSO, Central Authentication Service (CAS), and other home built solutions. These Single-Sign-On systems use reverse proxy servers that sit in front of the application, intercept all requests and automatically redirect users to a login page if they are not authenticated. When an internal system consists of multiple tiers such as services, it is often required to pass authentication tokens to those services. In J2EE systems, you can use the Common Secure Interoperability (CSIv2) protocol to pass the authentication to other tiers, which uses the Security Attribute Service (SAS) protocol to perform client authentication and impersonation.

For external systems, Open ID is the way to go, and I have used RPX to integrate Open ID for a number of sites I have developed such as http://wazil.com/, http://dealredhot.com/, etc.

There are a number of factors that make authentication a bit tricky. For example, when part of your system does not require authentication, you have to ensure the authentication policy is still applied correctly. Also, authentication generally requires https instead of http, so you have to ensure that the site uses those protocols consistently. In general, static content such as css, javascript and images does not require authentication, but it is often put behind authentication by mistake.

Another factor related to authentication is session management. A session determines how long the user can access the system without logging in again. Many systems provide a remember-me feature, but sessions often require system resources on the server. It’s essential to keep the session small as it can affect scalability if it’s stored on the server. I generally prefer keeping the session very small and storing only the user-id and a couple of other database-ids such as shopping-cart-id, request-id, etc. If they are small, they can also be stored in cookies, which makes for a stateless system that you can scale easily.
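
For illustration, here is a minimal sketch of that idea, assuming a standard Servlet API environment; the cookie name and helper class are made up, and in practice the cookie value should also be signed or encrypted:

 import javax.servlet.http.Cookie;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;

 public class StatelessSessionHelper {
     // store only small identifiers in a cookie instead of server-side session state
     public void saveSession(HttpServletResponse res, String userId, String cartId) {
         Cookie cookie = new Cookie("app-session", userId + "|" + cartId);
         cookie.setMaxAge(30 * 60); // expire after thirty minutes
         res.addCookie(cookie);
     }

     // read the identifiers back on the next request
     public String[] loadSession(HttpServletRequest req) {
         if (req.getCookies() != null) {
             for (Cookie cookie : req.getCookies()) {
                 if ("app-session".equals(cookie.getName())) {
                     return cookie.getValue().split("\\|");
                 }
             }
         }
         return null;
     }
 }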

Authorization

Not all users are the same in most systems, so authorization allows you to limit usage based on permissions and access control. There are a number of ways to define authorization such as Access control lists, Role-based access control, Capability-based security, etc. In most systems, I have used J2EE/EJB Security, Java Web Security, JAAS, Acegi (which is now part of Spring) and home built systems. As security is a cross cutting concern, I prefer to define it declaratively in a common security file or with annotations, as sketched below. There is nothing worse than sporadic security code mixed with your business logic.
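
As a small illustration of the declarative style, here is a sketch using Spring Security’s annotations (the successor of Acegi); the service class, method and role names are just examples:

 import org.springframework.security.access.annotation.Secured;
 import org.springframework.security.access.prepost.PreAuthorize;

 public class PurchaseOrderService {
     // only users holding the approver role may invoke this method
     @Secured("ROLE_PO_APPROVER")
     public void approve(String purchaseOrderId) {
         // ... business logic
     }

     // the same rule expressed as a runtime-evaluated expression
     @PreAuthorize("hasRole('ROLE_PO_APPROVER')")
     public void reject(String purchaseOrderId) {
         // ... business logic
     }
 }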

One feature I have found lacking in most open source and commercial tools is support for instance based security, or dynamic security that verifies runtime properties. For example, in most RBAC systems you can define a rule that a purchase order can be approved by the role “POApprover”, but you cannot say that a “POApprover” can only approve if the user is from the same department or if the amount is less than $10,000, etc. The sketch below shows the kind of check this implies.
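
This is a rough sketch only; the method and its parameters are purely illustrative, and the $10,000 limit comes from the example above:

 public class PurchaseOrderApprovalCheck {
     // a plain RBAC check only asks "does the user have the POApprover role?";
     // an instance based check also inspects runtime properties of the request
     public boolean canApprove(boolean hasApproverRole, String approverDept,
             String poDept, double poAmount) {
         return hasApproverRole
                 && approverDept.equals(poDept)
                 && poAmount < 10000;
     }
 }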

UI or Resource Protection

When users have various levels of access, it is essential to hide the UI elements and resources that are not accessible. I have seen some systems employ security by obscurity and only hide the resources without actually enforcing the permissions, but that is a bad idea. This can become complicated when the access level is very fine grained, such as when a single form has fields that depend on roles and permissions.

Database Security

Security must be enforced in depth, across the UI, business and database tiers. The database operations must use security to prevent access to unauthorized data. For example, if a user can post and edit blogs, it is essential that the database only allows the user to modify his/her own blog, as sketched below. Also, it is critical that any kind of sensitive data such as passwords or personal identification is stored with encryption. This is another reason I like OpenId or SSO solutions, because you don’t need to maintain passwords yourself.
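
As a small sketch of enforcing ownership at the data-access layer (the table and column names are made up), the query itself can restrict updates to rows owned by the current user:

 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;

 public class BlogDao {
     // the WHERE clause ensures a user can only update his/her own blog entry
     public int updateBlog(Connection conn, long blogId, long currentUserId,
             String newBody) throws SQLException {
         String sql = "UPDATE blogs SET body = ? WHERE id = ? AND owner_id = ?";
         PreparedStatement stmt = conn.prepareStatement(sql);
         try {
             stmt.setString(1, newBody);
             stmt.setLong(2, blogId);
             stmt.setLong(3, currentUserId);
             return stmt.executeUpdate(); // returns 0 if the row is not owned by the user
         } finally {
             stmt.close();
         }
     }
 }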

Method/Message Security

Message security ensures that a user only invokes the operations that he/she is authorized to invoke. For example, Acegi provides an annotation based mechanism to protect unauthorized method calls.

Data Integrity

Any communication based system may need to use a message authentication code (MAC) to detect changes to the data.
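
For example, an HMAC can be computed and verified with the standard javax.crypto APIs; this is just a sketch of signing a payload with a shared secret key:

 import java.security.MessageDigest;
 import javax.crypto.Mac;
 import javax.crypto.spec.SecretKeySpec;

 public class MacUtils {
     // compute an HMAC-SHA256 tag over the message with a shared secret key
     public static byte[] sign(byte[] key, byte[] message) throws Exception {
         Mac mac = Mac.getInstance("HmacSHA256");
         mac.init(new SecretKeySpec(key, "HmacSHA256"));
         return mac.doFinal(message);
     }

     // verify by recomputing the tag and comparing the two values
     public static boolean verify(byte[] key, byte[] message, byte[] tag) throws Exception {
         return MessageDigest.isEqual(sign(key, message), tag);
     }
 }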

Confidentiality

Any communication based system may need to encrypt sensitive data in transit using HTTPS.

Non-repudiation

The system must audit user actions so that they cannot be repudiated.

Summary

As achieving a high level of security can be difficult and expensive, you need to treat security as a risk and employ the level of security that suits the underlying system. Finally, as I have found most RBAC systems lacking, I have started my own open source project PlexRBAC to provide instance based security. Of course, if you are interested in assisting with the effort, you are welcome to join the project.

November 16, 2009

Applying Adaptive Object Model using dynamic languages and schema-less databases

Filed under: Java — admin @ 3:10 pm

Introduction to Adaptive/Active Object Model

Adaptive or Active Object Model is a design pattern used in domains that require dynamic manipulation of meta information. Though it is quite an extensive topic of research, the general idea from the original paper by Ralph Johnson is to treat meta information such as attributes, rules and relationships as data. It is usually used when the number of sub-classes is huge or unknown upfront and the system requires adding new functionality without downtime. For example, let’s say we are working in the automobile domain and we need to model different types of vehicles. Using an object oriented design would result in a vehicle hierarchy such as the following:

In the above example, the whole type hierarchy is predefined and each class within the hierarchy defines its attributes and operations. Adaptive Object Modeling, on the other hand, uses the Type Object pattern, which treats classes like objects. The basic Adaptive Object Model uses a type square model such as:

In the above diagram, the EntityType class represents all classes, and an instance of this class defines the actual attributes and operations supported by the class. Similarly, PropertyType defines the names and types of all attributes. Finally, an instance of the Entity class will actually be a real object instance that stores a collection of properties and refers to its EntityType.

Java Implementation

Let’s assume we only need to model the Vehicle class from the above vehicle hierarchy. In a typical object oriented language such as Java, the Vehicle class would be defined as follows:

 /*
  * Simple Vehicle class
  */
 package com.plexobject.aom;

 import java.util.Date;

 public class Vehicle {

     private String maker;
     private String model;
     private Date yearCreated;
     private double speed;
     private long miles;
     //... other attributes, accessors, setters

     public void drive() {
         //
     }

     public void stop() {
         //
     }

     public void performMaintenance() {
         //
     }
     //... other methods
 }

As you can see, all attributes and operations are defined within the Vehicle class. The Adaptive Object Model would instead use meta classes such as Entity, EntityType, Property and PropertyType to build the Vehicle metaclass. The following Java code defines the core classes of the type square model:

The Property class defines type and value for each attribute of class:

 /*
  * Property class defines attribute type and value
  */
 package com.plexobject.aom;

 public class Property {

     private PropertyType propertyType;
     private Object value;

     public Property(PropertyType propertyType, Object value) {
         this.propertyType = propertyType;
         this.value = value;
     }

     public PropertyType getPropertyType() {
         return propertyType;
     }

     public Object getValue() {
         return value;
     }
     //... other methods
 }

The PropertyType class defines type information for each attribute of class:

 /*
  * PropertyType class defines type information
  */
 package com.plexobject.aom;

 public class PropertyType {

     private String propertyName;
     private String type;

     public PropertyType(String propertyName, String type) {
         this.propertyName = propertyName;
         this.type = type;
     }

     public String getPropertyName() {
         return propertyName;
     }

     public String getType() {
         return type;
     }
     //... other methods
 }

The EntityType class defines the type of an entity:

 /*
  * EntityType class defines attribute types and operations
  */
 package com.plexobject.aom;

 import java.util.Collection;
 import java.util.HashMap;
 import java.util.Map;

 public class EntityType {

     private String typeName;
     private Map<String, PropertyType> propertyTypes = new HashMap<String, PropertyType>();
     private Map<String, Operation> operations = new HashMap<String, Operation>();

     public EntityType(String typeName) {
         this.typeName = typeName;
     }

     public String getTypeName() {
         return typeName;
     }

     public void addPropertyType(PropertyType propertyType) {
         propertyTypes.put(propertyType.getPropertyName(), propertyType);
     }

     public Collection<PropertyType> getPropertyTypes() {
         return propertyTypes.values();
     }

     public PropertyType getPropertyType(String propertyName) {
         return propertyTypes.get(propertyName);
     }

     public void addOperation(String operationName, Operation operation) {
         operations.put(operationName, operation);
     }

     public Operation getOperation(String name) {
         return operations.get(name);
     }

     public Collection<Operation> getOperations() {
         return operations.values();
     }
     //... other methods
 }

The Entity class defines the entity itself:

 /*
  * Entity class represents an instance of the actual metaclass
  */
 package com.plexobject.aom;

 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;

 public class Entity {

     private EntityType entityType;
     // properties of this particular instance
     private Collection<Property> properties = new ArrayList<Property>();

     public Entity(EntityType entityType) {
         this.entityType = entityType;
     }

     public EntityType getEntityType() {
         return entityType;
     }

     public void addProperty(Property property) {
         properties.add(property);
     }

     public Collection<Property> getProperties() {
         return Collections.unmodifiableCollection(properties);
     }

     public Object perform(String operationName, Object[] args) {
         return entityType.getOperation(operationName).perform(this, args);
     }
     //... other methods
 }

The Operation interface is used for implementing behavior using Command pattern:

 /*
  * Operation interface defines behavior
  */
 package com.plexobject.aom;

 public interface Operation {

     Object perform(Entity entity, Object[] args);
 }

The above meta classes would be used to create classes and objects. For example, the type information of the Vehicle class would be defined using EntityType and PropertyType, and the instance would be defined using the Entity and Property classes as follows. Though in real applications the type binding would be stored in XML configuration or defined in some DSL, I am binding it programmatically below:

 /*
  * an example of binding attributes and operations of Vehicle
  */
 package com.plexobject.aom;

 import java.util.Date;

 public class Initializer {

     public void bind() {
         EntityType vehicleType = new EntityType("Vehicle");
         vehicleType.addPropertyType(new PropertyType("maker",
                 "java.lang.String"));
         vehicleType.addPropertyType(new PropertyType("model",
                 "java.lang.String"));
         vehicleType.addPropertyType(new PropertyType("yearCreated",
                 "java.util.Date"));
         vehicleType.addPropertyType(new PropertyType("speed",
                 "java.lang.Double"));
         vehicleType.addPropertyType(new PropertyType("miles",
                 "java.lang.Long"));
         vehicleType.addOperation("drive", new Operation() {
             public Object perform(Entity entity, Object[] args) {
                 return "driving";
             }
         });
         vehicleType.addOperation("stop", new Operation() {
             public Object perform(Entity entity, Object[] args) {
                 return "stopping";
             }
         });
         vehicleType.addOperation("performMaintenance", new VehicleMaintenanceOperation());

         // now creating an instance of Vehicle
         Entity vehicle = new Entity(vehicleType);
         vehicle.addProperty(new Property(vehicleType.getPropertyType("maker"),
                 "Toyota"));
         vehicle.addProperty(new Property(vehicleType.getPropertyType("model"),
                 "Highlander"));
         vehicle.addProperty(new Property(vehicleType.getPropertyType("yearCreated"),
                 new Date(2003, 0, 1)));
         vehicle.addProperty(new Property(vehicleType.getPropertyType("speed"), new Double(120)));
         vehicle.addProperty(new Property(vehicleType.getPropertyType("miles"), new Long(3000)));
         vehicle.perform("drive", null);
     }
 }

The operations define the runtime behavior of the class and can be defined as closures (anonymous classes) or as external implementations such as VehicleMaintenanceOperation below:

 /*
  * an example of operation
  */
 package com.plexobject.aom;

 class VehicleMaintenanceOperation implements Operation {

     public VehicleMaintenanceOperation() {
     }

     public Object perform(Entity entity, Object[] args) {
         return "maintenance";
     }
 }

In real applications, you would also have meta classes for business rules, relationships, strategies, validations, etc. as instances. As you can see, AOM provides a powerful way to adopt new business requirements, and I have seen it used successfully while working as a consultant. On the downside, it requires a lot of plumbing and tooling support such as XML based configurations or GUI tools to manipulate the meta data. I have also found it difficult to optimize with relational databases, as each attribute and operation is stored in a separate row, which results in excessive joins when building the object. There are a number of alternatives to the Adaptive Object Model such as code generators, generative techniques, metamodeling, and table-driven systems. These techniques are much easier with dynamic languages due to their support for metaprogramming, higher order functions and generative programming. Also, over the last few years, a number of schema-less databases such as CouchDB, MongoDB, Redis, Cassandra, Tokyo Cabinet, Riak, etc. have become popular due to their ease of use and scalability. These databases solve the excessive-join limitation of relational databases and allow evolution of applications similar to the Adaptive Object Model. They are also much more scalable than traditional databases. The combination of dynamic languages and schema-less databases provides a simple way to add Adaptive Object Model features without a lot of plumbing code.

Javascript Implementation

Let’s try the above example in Javascript, which supports higher order functions and prototype-based inheritance. First, we will need to add some helper methods (adapted from Douglas Crockford’s “Javascript: The Good Parts”), e.g.

 if (typeof Object.beget !== 'function') {
     Object.beget = function(o) {
         var F = function() {};
         F.prototype = o;
         return new F();
     };
 }

 Function.prototype.method = function (name, func) {
     this.prototype[name] = func;
     return this;
 };

 Function.method('new', function() {
     // create a new object that inherits from the constructor's prototype
     var that = Object.beget(this.prototype);
     // invoke the constructor, binding -this- to the new object
     var other = this.apply(that, arguments);
     // if its return value isn't an object, substitute the new object
     return (typeof other === 'object' && other) || that;
 });

 Function.method('inherits', function(Parent) {
     this.prototype = new Parent();
     return this;
 });

 Function.method('bind', function(that) {
     var method = this;
     var slice = Array.prototype.slice;
     var args = slice.apply(arguments, [1]);
     return function() {
         return method.apply(that, args.concat(slice.apply(arguments, [0])));
     };
 });

 // as typeof is unreliable in Javascript, try to get the type from the constructor
 Object.prototype.typeName = function() {
     return typeof(this) === 'object' ? this.constructor.toString().split(/[\s\(]/)[1] : typeof(this);
 };
 

There is no need to define the Operation interface or the Property and PropertyType classes, thanks to higher order functions and dynamic typing. The following Javascript code defines the core functionality of the Entity and EntityType classes:

 var EntityType = function(typeName, propertyNamesAndTypes) {
     this.typeName = typeName;
     this.propertyNamesAndTypes = propertyNamesAndTypes;
     this.getPropertyTypesAndNames = function() {
         return this.propertyNamesAndTypes;
     };
     this.getPropertyType = function(propertyName) {
         return this.propertyNamesAndTypes[propertyName];
     };
     this.getTypeName = function() {
         return this.typeName;
     };
     var that = this;
     // bind an accessor for each declared property type, e.g. vehicleType.maker()
     for (var propertyTypesAndName in propertyNamesAndTypes) {
         that[propertyTypesAndName] = function(name) {
             return function() {
                 return propertyNamesAndTypes[name];
             };
         }(propertyTypesAndName);
     }
 };

 var Entity = function(entityType, properties) {
     this.entityType = entityType;
     this.properties = properties;
     this.getEntityType = function() {
         return this.entityType;
     };
     var that = this;
     // bind a getter/setter for each property, e.g. vehicle.maker() or vehicle.speed(65)
     for (var propertyTypesAndName in entityType.getPropertyTypesAndNames()) {
         that[propertyTypesAndName] = function(name) {
             return function() {
                 if (arguments.length == 0) {
                     return that.properties[name];
                 } else {
                     var oldValue = that.properties[name];
                     that.properties[name] = arguments[0];
                     return oldValue;
                 }
             };
         }(propertyTypesAndName);
     }
 };
 

The following Javascript code shows the binding and an example of usage (again, in a real application the binding would be stored in configuration):

 var vehicleType = new EntityType('Vehicle', {
     'maker' : 'String',              // name -> typeName
     'model' : 'String',
     'yearCreated' : 'Date',
     'speed' : 'Number',
     'miles' : 'Number'
 });

 var vehicle = new Entity(vehicleType, {
     'maker' : 'Toyota',
     'model' : 'Highlander',
     'yearCreated' : new Date(2003, 0, 1),
     'speed' : 120,
     'miles' : 3000
 });

 // operations are bound at runtime; the return values mirror the Java example
 vehicle.drive = function() {
     return 'driving';
 }.bind(vehicle);

 vehicle.stop = function() {
     return 'stopping';
 }.bind(vehicle);

 vehicle.performMaintenance = function() {
     return 'maintenance';
 }.bind(vehicle);

A big difference with dynamic languages is that you can bind properties and operations to objects at runtime and then invoke them as if they were native. For example, you can call vehicleType.maker() to get the declared type of the maker property, or vehicle.drive() to invoke an operation on the vehicle object. Another difference is that a lot of the plumbing code simply disappears, as the short sketch below shows.
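
To make the runtime binding concrete, here is a minimal usage sketch of my own, assuming the EntityType, Entity, and helper definitions above; the values in the comments are what those definitions produce:

 // getters/setters and operations attached at runtime read like native members
 vehicleType.maker();   // 'String'  - the declared type of the maker property
 vehicle.maker();       // 'Toyota'  - a no-argument call acts as a getter
 vehicle.speed(65);     // sets speed to 65 and returns the old value (120)
 vehicle.drive();       // 'driving' - operation bound with bind(vehicle)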

Ruby Implementation

Similarly, the above example in Ruby may look like this:

 require 'date'
 require 'forwardable'

 class EntityType
   attr_accessor :type_name
   attr_accessor :property_names_and_types
   def initialize(type_name, property_names_and_types)
     @type_name = type_name
     @property_names_and_types = property_names_and_types
   end
   def property_type(property_name)
     @property_names_and_types[property_name]
   end
 end

 class Entity
   attr_accessor :entity_type
   attr_accessor :properties
   def initialize(entity_type, attrs = {})
     @entity_type = entity_type
     bind_properties(entity_type.property_names_and_types)
     attrs.each do |name, value|
       instance_variable_set("@#{name}", value)
     end
   end
   def bind_properties(property_names_and_types)
     (class << self; self; end).module_eval do
       property_names_and_types.each do |name, type|
         # reader, e.g. vehicle.maker
         define_method name.to_sym do
           instance_variable_get("@#{name}")
         end
         # writer, e.g. vehicle.maker = 'Honda'
         define_method "#{name}=".to_sym do |value|
           instance_variable_set("@#{name}", value)
         end
       end
     end
   end
 end
 

We can then use singleton classes, lambdas, and the metaprogramming features of Ruby to add Adaptive Object Model support, e.g.

 vehicle_type = EntityType.new('Vehicle', {
     'maker' => 'String',             # class.name
     'model' => 'String',
     'yearCreated' => 'Time',
     'speed' => 'Fixnum',
     'miles' => 'Float'})

 vehicle = Entity.new(vehicle_type, {
     'maker' => 'Toyota',
     'model' => 'Highlander',
     'yearCreated' => DateTime.parse('1-1-2003'),
     'speed' => 120,
     'miles' => 3000})

 class << vehicle
   def drive
     "driving"
   end
   def stop
     "stopping"
   end
   def perform_maintenance
     "performing maintenance"
   end
 end
 

The Ruby code is a lot more succinct, and as Ruby supports adding or removing methods dynamically, you can invoke properties and operations directly on the objects. For example, you can call vehicle.maker to get the maker property of the vehicle, or vehicle.drive to invoke an operation on the vehicle object. Also, Ruby provides a lot more options for higher order behavior, such as monkey patching, lambdas/procs/methods, send, and delegates/forwardables. Finally, Ruby provides powerful generative capabilities for building DSLs that can bind all properties and operations at runtime, similar to how the Rails framework works.

Schema-less Databases

Now, the second half of the equation for the Adaptive Object Model is persistence, which I have found to be a challenge with relational databases. However, since I have been using schema-less databases such as CouchDB, it is trivial to store the meta information alongside the plain data. For example, to store this vehicle in CouchDB, all I have to do is create a database such as vehicles (analogous to Single Table Inheritance, all types of vehicles can live in the same database):

 curl -XPUT http://localhost:5984/vehicles
 curl -XPUT http://localhost:5984/vehicle_types
 

and then add the vehicle type as follows:

 curl -XPOST http://localhost:5984/vehicle_types/ -d '{"maker":"String", "model":"String", "yearCreated":"Date", "speed":"Number", "miles":"Number"}'
 

which returns

 {"ok":true,"id":"bb70f95e43c3786f72cb46b372a2808f","rev":"1-3976038079"}
 

Now, we can use the id of the vehicle type and add a vehicle as follows:

 curl -XPOST http://localhost:5984/vehicles/ -d '{"vehicle_type_id":"bb70f95e43c3786f72cb46b372a2808f", "maker":"Toyota", "model":"Highlander", "yearCreated":"2003", "speed":120, "miles":3000}'
 

which returns the id of the newly created vehicle:

 {"ok":true,"id":"259237d7c041c405f0671d6774bfa57a","rev":"1-367618940"}
 
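
Tying the two halves together, here is a minimal Javascript sketch of my own (not from the original post) that feeds the stored JSON documents straight into the EntityType and Entity constructors from the Javascript section above; no object/relational mapping or joins are needed:

 // documents as returned by CouchDB (CouchDB metadata fields omitted for brevity)
 var vehicleTypeDoc = { 'maker' : 'String', 'model' : 'String',
                        'yearCreated' : 'Date', 'speed' : 'Number', 'miles' : 'Number' };
 var vehicleDoc = { 'maker' : 'Toyota', 'model' : 'Highlander',
                    'yearCreated' : '2003', 'speed' : 120, 'miles' : 3000 };

 // the schema-less documents map directly onto the AOM constructors
 var vehicleType = new EntityType('Vehicle', vehicleTypeDoc);
 var vehicle = new Entity(vehicleType, vehicleDoc);
 vehicle.maker();   // 'Toyota'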

Summary

It is often said in software development that you can solve any problem with another level of indirection. The Adaptive Object Model uses that extra level of indirection to create powerful applications that can keep up with constantly changing requirements. When combined with dynamic languages that support metaprogramming and generative programming, it can be used to build systems that evolve with minimal changes and downtime. Schema-less databases, in turn, eliminate the main drawback of many AOM implementations, namely poor performance due to excessive joins in relational databases.

September 21, 2009

Installing Ubuntu Remix and Troubleshooting Network connections

Filed under: Computing — admin @ 10:00 am

I recently ordered an ASUS Eee PC 1005HA netbook that got lost in the mail, so I had to reorder it. Anyway, I finally received it this weekend. It comes with Windows XP, which I decided to replace with Ubuntu. Though there is a special distribution of Ubuntu for netbooks called Ubuntu Netbook Remix (UNR), netbook support in Ubuntu is still a work in progress, so the whole process took longer than I expected. Here are the steps I went through to install and set up UNR on my ASUS netbook:

Download Ubuntu Remix

This was easy: I downloaded UNR from http://www.ubuntu.com/GetUbuntu/download-netbook and saved the .img file on the netbook (which was still running XP at that time).

Download USB Imager

Then, I downloaded the USB Disk Imager for Windows.

Creating UNR Image

After downloading the imager, I opened the application, inserted my USB drive, and copied the image onto it; so far so good.

Changing BIOS to boot from USB

The ASUS boots from the hard disk by default, so I had to change the BIOS settings. I shut down the machine completely, then started it while holding F2. That brought up the BIOS settings; I changed the boot sequence to boot from USB first and saved the settings with F10.

Installing UNR

After rebooting, UNR loaded from the USB drive. First, I tried it without installing and quickly figured out that the network wasn’t working. I decided to install UNR despite that. I allocated about half of the disk space (about 70G) to Linux and left the Windows partition alone in case the installation failed. I then allocated swap space and proceeded with the install, which was fairly standard. After installation, I rebooted the machine and the GRUB loader showed both Windows and UNR options.

Troubleshooting Network

Now the fun started: neither my wired nor my wireless network was working. I found a number of forum posts describing similar problems. I tried

 iwconfig
 iwlist scan
 lsmod
 

to see what was installed and available, but I didn’t see the drivers. Also, “dmesg” wasn’t helpful, and

  sudo /etc/init.d/networking restart
 

didn’t help either. I then typed

 lspci
 

which showed

 02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
 

I then switched to my Mac and looked for a driver for the AR9285. I found a good resource at http://partner.atheros.com/Drivers.aspx, downloaded the Linux driver, and copied it to another USB drive. I built the driver with

 tar -xzf 
 cd src
 make 
 sudo make install
 sudo insmod atl1e.ko
 

After rebooting, the wired network was fixed, and I could then use it to continue troubleshooting the wireless. I tried following the instructions from http://wireless.kernel.org/en/users/Download, which suggested

 sudo apt-get install linux-backports-modules-jaunty
 

But it didn’t work for me. I then tried

 apt-get install linux-backports-modules-$(uname -r)
 

That didn’t work either. Finally, I decided to upgrade to Karmic Koala by issuing this command:

 sudo do-release-upgrade -d
 

It took a while to download all the packages; it then removed a bunch of obsolete packages and, after reboot, complained about some old configuration files that were no longer compatible. Nevertheless, my wireless started working. Next, I am going to install Regdb, CRDA, and IW to track down any other wireless issues.

I still left the dual-boot option on my netbook, but I am definitely going to live in UNR for the most part.

September 2, 2009

Introduction to CouchDB

Filed under: Computing — admin @ 6:51 pm

I have been following the growth and popularity of CouchDB for a while and even attended an excellent talk by J. Chris Anderson of http://couch.io. However, only recently have I gotten a chance to actually use it. I am building an internal search engine based on Lucene, but I am storing the documents in CouchDB. Though CouchDB is pretty easy to set up, its documentation is sporadic. Here are the basic steps to get it running:

Installation and Launch

I installed CouchDB on my MacBook Pro using MacPorts:

 sudo port install couchdb
 

CouchDB is also available for Linux distributions, where you can use yum or apt to install it, though official binaries are not available for Windows. On the Mac, you can set it up to load at startup using:

 sudo launchctl load -w /opt/local/Library/LaunchDaemons/org.apache.couchdb.plist
 

Once it is installed, you can start the CouchDB server using:

 sudo /opt/local/bin/couchdb
 

Alternatively, you can skip the installation and launch steps and instead use the hosted solution from http://hosting.couch.io, using the password “booom-couch” for the private beta.

Verify Installation

Once CouchDB is started, you can point your browser to http://127.0.0.1:5984/ or type:

 curl http://127.0.0.1:5984/
 

As CouchDB uses JSON for all communication, it will show something like:

 {"couchdb":"Welcome","version":"0.9.0"}
 

Alternatively, you can use curl to communicate with the CouchDB server:

 curl http://127.0.0.1:5984/
 

Creating a database

CouchDB is a REST-based service, and you can review all of its APIs at http://wiki.apache.org/couchdb/HTTP_Document_API. CouchDB uses the PUT operation to create a database, e.g.

 curl -X PUT http://127.0.0.1:5984/guestbook
 

It will return

 {"ok":true}
 

Based on REST principles, PUT is used when adding new data where the resource name is specified by the client. However, if you call this API again with the same arguments, it will return an error, e.g.:

 {"error":"file_exists","reason":"The database could not be created, the file already exists."}
 

Adding documents

Each document is a JSON object that consists of name/value pairs. Also, each document is identified by a unique identifier, or UUID. You can generate the UUID in your application or get it from the CouchDB server. For example, to generate 10 UUIDs, call

 curl -X GET http://127.0.0.1:5984/_uuids?count=10
 

and it will return something like:

 {"uuids":["152019530472f7b0b364367bc2ec571d","cba55d13244afe7b924265760deccced","41a8d0d7093ac11827b3147565a08a80","281dc15503fffee17c9da332748e9288","90613ae77c78c8bd81849b728d648055","23c320522473bdd47071d56b72667172","bb8b72a9dc391e95ffd5e155d8bf7011","87b8da3e3cf0c16110e030a711dc26b3","cfdf87adc2cf4593a92e4edf38f2f557","dc80745c5cb478de48230e48efaf5ede"]}
 

You can then add a document using:

 curl -X PUT http://127.0.0.1:5984/guestbook/152019530472f7b0b364367bc2ec571d -d '{"name":"Sally", "message":"hi there"}'
 

It will return a confirmation message:

 {"ok":true,"id":"152019530472f7b0b364367bc2ec571d","rev":"1-3525253587"}
 

Note that it generated a revision (“rev”) for the document. Alternatively, you can use a POST request to add a document with a server-generated UUID, e.g.

 curl -X POST http://127.0.0.1:5984/guestbook -d '{"name":"John", "message":"hi there"}'
 

That returns the UUID and revision of the newly created document, e.g.

 {"ok":true,"id":"b4bb85ab50271f3d12d25feb219cb66e","rev":"1-657551114"}
 

Also, you can add binaries such as images to CouchDB as attachments, e.g.

 curl -vX PUT http://127.0.0.1:5984/guestbook/6e1295ed6c29495e54cc05947f18c8af/image.jpg?rev=2-2739352689 -d@image.jpg -H "Content-Type: image/jpg"
 

Reading documents

CouchDB uses the GET operation to read a document; you simply pass the id of the document, e.g.

 curl -X GET http://127.0.0.1:5984/guestbook/152019530472f7b0b364367bc2ec571d
 

which returns

 {"_id":"152019530472f7b0b364367bc2ec571d","_rev":"1-3525253587","name":"Sally","message":"hi there"}
 

Updating documents

CouchDB uses optimistic locking to update documents, so the revision number must be passed when updating a document. Also, CouchDB uses an append-only storage model, so it creates a new revision of the document upon each update. For example, if you type the same PUT command again without the revision, you will see:

 {"error":"conflict","reason":"Document update conflict."}
 

In order to update the document, the current revision must be specified, e.g.

 curl -X PUT http://127.0.0.1:5984/guestbook/152019530472f7b0b364367bc2ec571d -d '{"_rev":"1-3525253587", "name":"Sally", "message":"hi there", "date":"September 5, 2009"}'
 

This will, in turn, create a new revision and return:

 {"ok":true,"id":"152019530472f7b0b364367bc2ec571d","rev":"2-1805813096"}
 

Deleting document/database

You can delete a document using the DELETE operation, passing the current revision, e.g.

 curl -X DELETE 'http://127.0.0.1:5984/guestbook/b4bb85ab50271f3d12d25feb219cb66e?rev=1-657551114'
 

Similarly, you can delete a database using:

 curl -X DELETE http://127.0.0.1:5984/guestbook
 

Querying Documents

CouchDB uses Javascript-based map and reduce functions to query and view documents, where a map function takes a document object and returns (emits) attributes from the document. Here is the simplest map function, which returns the entire document:

 function(doc) {
       emit(null, doc);
 }
 

Here is another example that returns the names of people who posted to the guestbook:

 function(doc) {
     if (doc.Type == "guestbook") {
         emit(null, {name: doc.name});
     }
 }
 

A reduce function is similar to the aggregation functions in most relational databases. For example, to count the number of entries per name you could define the map function as

 function (doc) {
     if (doc.Type == "guestbook") {
         emit(doc.name, 1);
     }
 }
 

and the reduce function as

 function (name, counts) {
     var sum = 0;
     for (var i = 0; i < counts.length; i++) {
         sum += counts[i];
     }
     return sum;
 }
 
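
In practice, such map/reduce functions live in a design document and are queried over HTTP. The sketch below is my own illustration, not from the original post; the design document and view names are hypothetical, and it simply packages the map/reduce pair defined above:

 // a hypothetical design document holding the map/reduce pair defined above
 var designDoc = {
     "_id": "_design/guestbook",
     "views": {
         "count_by_name": {
             "map": "function(doc) { if (doc.Type == 'guestbook') { emit(doc.name, 1); } }",
             "reduce": "function(name, counts) { var sum = 0; for (var i = 0; i < counts.length; i++) { sum += counts[i]; } return sum; }"
         }
     }
 };
 // PUT this document to /guestbook/_design/guestbook, then query the view with
 // GET /guestbook/_design/guestbook/_view/count_by_name?group=true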

All Databases

You can list the names of all databases using:

 curl -X GET http://127.0.0.1:5984/_all_dbs
 

You can also get all documents for a particular database (guestbook):

 curl -X GET http://127.0.0.1:5984/guestbook/_all_docs
 

CouchDB also comes with a web-based admin application called Futon for creating, updating, and listing databases and documents; simply go to http://127.0.0.1:5984/_utils/ and you will see all databases in the system. You can also control replication from that UI, which is pretty handy. In addition, you can poll for database changes using:

 curl -X GET 'http://127.0.0.1:5984/guestbook/_changes?feed=longpoll&since=2'
 

Also, you can get statistics using:

 curl -X GET http://127.0.0.1:5984/_stats/
 

And the configuration via:

 curl -X GET http://127.0.0.1:5984/_config
 

Replication

CouchDB is written in Erlang and takes advantage of Erlang’s strengths in building distributed systems; it supports incremental replication of databases over HTTP. In order to replicate, first create a target database (here on the same server), e.g.

 curl -X PUT http://127.0.0.1:5984/guestbook-replica
 

Then replicate using:

 curl -X POST http://127.0.0.1:5984/_replicate -H 'Content-Type: application/json' -d '{"source":"guestbook", "target":"http://127.0.0.1:5984/guestbook-replica"}'
 

Security

You can add user/password based basic authentication by editing the /opt/local/etc/couchdb/local.ini file. You will then need to pass the user/password when accessing the CouchDB server, e.g.

 
 curl --basic -u 'user:pass' -X PUT http://127.0.0.1:5984/guestbook
 

Summary

I just started using CouchDB and I am still learning its more advanced features and its capabilities in an enterprise-level environment. Though it looks very promising, I am keeping Berkeley DB in my back pocket in case I run into severe issues.

August 15, 2009

Releasing Wazil.com

Filed under: Computing — admin @ 11:30 am

I just finished a brand new website, Wazil.com, and a companion Facebook app for posting yellow pages and classifieds. I am working on building local communities for this website that will show search results based on your location. Please give it a try and send me your comments and suggestions.
