Tuesday, December 30, 2008

Getting available screen space for web pages

One of the most common requirements when creating a web page is to get the
space available in the browser window where the content will be displayed. It is always preferable to fit the content into the available space to prevent scroll bars and reduce the effort required on the part of the user. A number of factors cause variations in
the width and height available for the content of a web page, among them:

  1. Windows Taskbar
  2. Status bar
  3. Toolbars (e.g., the MSN toolbar, Google toolbar, etc.)

Because the user can have any combination of these enabled, it is difficult to determine the exact width and height available. Add to this the fact that different browsers support different sets of BOM properties and, even for the common set, assign them different meanings. Hence, we need to combine multiple approaches so that the code works in almost all browsers. The code that does exactly this is given below :

//Returns an array containing the available width and height
function GetWindowSize() {
    try {
        var myWidth = 0, myHeight = 0;
        if (typeof (window.innerWidth) == 'number') {
            //Non-IE browsers
            myWidth = window.innerWidth;
            myHeight = window.innerHeight;
        }
        else if (document.documentElement && (document.documentElement.clientWidth || document.documentElement.clientHeight)) {
            //IE 6+ in 'standards compliant mode'
            myWidth = document.documentElement.clientWidth;
            myHeight = document.documentElement.clientHeight;
        }
        else if (document.body && (document.body.clientWidth || document.body.clientHeight)) {
            //IE 4 compatible
            myWidth = document.body.clientWidth;
            myHeight = document.body.clientHeight;
        }
        return new Array(myWidth, myHeight);
    }
    catch (ex) {
        return null;
    }
}

The above code should suffice for almost all browsers and browser versions. It is not advisable to use "screen.availWidth" and "screen.availHeight" since they describe the screen (minus the taskbar) rather than the browser window, and hence don't account for toolbars or the status bar. I have tested the above code in IE and Firefox, with and without toolbars, and it worked like a charm.
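Once the available size is known, it can be applied to the page's content container. Below is a minimal sketch of that step; the sizeContentTo helper and the element-like argument are illustrative additions, not part of the code above :

```javascript
//Apply an available [width, height] pair to an element's inline style.
//'el' can be any object with a style property (a DOM element in practice).
function sizeContentTo(el, size) {
    if (!el || !size)
        return el;
    el.style.width = size[0] + "px";
    el.style.height = size[1] + "px";
    return el;
}
```

On a real page this would be invoked as sizeContentTo(document.getElementById('content'), GetWindowSize());.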

Monday, December 29, 2008

Preventing JavaScript injection attacks

Whenever one is developing web pages that accept input from the user, one needs to be extra careful in order to prevent the extremely dangerous XSS (Cross Site Scripting) attacks. Typically, these attacks are carried out by users who input malicious scripts in the input fields provided on the web page. The scripts get executed when the page tries to display back the values that were input by the user. For example, if the user is presented with a field for entering his name, he may enter something like

ABC;<script>alert("This site is hacked")</script>

Now, whenever the site displays this user's name, the script also gets executed and the viewer of the page will get an alert box proclaiming "This site is hacked". Although this is a contrived example, there is the potential to do much more harmful things using the same technique. Skilled users can write scripts that can read data from cookies, for example, and send to another site (hence the name cross site scripting).

In order to prevent such things from happening, all one has to do is pass every value the user provides through "Server.HtmlEncode" (assuming you are using ASP.NET). There are 2 places where you can do the encoding :

  1. You can do the encoding while storing the data in the database so that anytime those values are to be displayed to the user, they won't cause any problems.
  2. You can also do the encoding just before showing the values to the user. This technique can be used when you are just displaying the values without ever storing them in the database or in case you need to display the values before storing them in the database

What Server.HtmlEncode does is convert special characters such as "<" and ">" to their HTML entity equivalents ("&lt;" and "&gt;") so that the browser renders them as literal text instead of parsing and executing them. You can find the complete list of actions taken by this method here.
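Server.HtmlEncode runs on the server; if you ever need the same protection in plain JavaScript (for example, before assigning user input to innerHTML), a minimal hand-rolled equivalent could look like the sketch below. The function name is mine, and it covers only the most common special characters :

```javascript
//Replace the characters HTML treats specially with their entity forms
//so the browser displays them as text instead of parsing them as markup.
function escapeHtml(input) {
    return String(input)
        .replace(/&/g, "&amp;")  //must run first to avoid double-escaping
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}
```

With this, the malicious name from the example above would be displayed literally instead of being executed.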

Remember to encode any and all values you accept from users to avoid having serious XSS attacks launched on your website.

Sunday, December 28, 2008

Weird behaviour of "onchange" event of Listbox in FireFox

The HTML listbox has an "onchange" event which is fired whenever the selected item in the listbox changes. The selection can be changed either by clicking an item, by using the up and down arrow keys, or by pressing the first letter of one of the items while the listbox has focus. Ideally, the onchange event should fire even when the selection is changed using the keyboard. In fact, the event does fire in all major browsers except Firefox, where onchange is called only when the listbox loses focus.

Suppose you have a form whose contents you would like to change based on the value selected in the listbox. For example, you might show different fields depending on whether the user is interested in Sports, Music, Dance etc. You can do this by capturing the selected item in the onchange event and making the necessary changes by manipulating the DOM. However, this breaks for Firefox users. To be able to capture the value every time the user makes a new selection, we need to add some JavaScript for the "onkeyup" event. The code needed is,

<select id="items" onchange='itemSelected();' onkeyup="this.blur();this.focus();">

All that needs to be done is to momentarily remove focus from the listbox so that the onchange event is called and then set the focus again. Restoring focus is necessary because, without it, the user would have to explicitly focus the listbox every time he changes its value with the keyboard - he wouldn't appreciate this if he wants to select a value that is 4-5 entries further down the list from where he currently is.
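For readability, the blur/focus trick can also be factored into a tiny helper; any object exposing blur() and focus() will do (a DOM listbox in practice), which also makes the behaviour testable outside a browser :

```javascript
//Momentarily drop and restore focus so browsers that defer the change
//event until blur (Firefox, in this case) fire it on each keystroke.
function forceChangeOnKeyup(listbox) {
    listbox.blur();   //triggers the pending onchange
    listbox.focus();  //restore focus so keyboard navigation keeps working
}
```

The markup then becomes onkeyup="forceChangeOnKeyup(this);" instead of inlining the two calls.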

I spent nearly 2 hours trying to figure out why the onchange event wasn't getting called in Firefox. I hope this post goes some way towards reducing your debugging time if you come across a similar situation.

Sunday, December 7, 2008

Calling Web Services from Javascript

One of the most powerful features of ASP.Net AJAX is the ability to call server side web services seamlessly from plain Javascript. There is nothing special that needs to be done to create a web service that can be called from JavaScript apart from adding the "ScriptService" attribute at the beginning of the web service declaration. We can use ASMX as well as WCF web services with ASP.Net AJAX. Let's take a look at the steps needed to call web services with AJAX.
  • Server side

On the server side, we will create a normal ASMX web service.

  1. Right click your project in Visual Studio and click "Add new item". In the dialog that opens up, select "Web service" and give it an appropriate name
  2. Once the web service has been added, open its .cs file. At the very top, before the class declaration, there will be the following commented attribute
  3. [System.Web.Script.Services.ScriptService]

    Uncomment this attribute.

  4. Add the methods you want to call from the client side and annotate them with the [WebMethod] attribute
  5. Test the web service and ensure that the web methods are working fine
  • Client side

On the client side, we will need to do the following to be able to call the web service created in the above steps

  1. Add a ScriptManager tag at the top of your page. The script manager is required on every page that should provide AJAX functionality. It instantiates the PageRequestManager class and handles the downloading of all the necessary script and service proxy files. (If you are using master pages and the ScriptManager is present in the master page, you can add a ScriptManagerProxy tag to the current page)
  2. Within the ScriptManager tag, add a ServiceReference and provide the path to the web service. It should look something like this,
  3. <Services>
        <asp:ServiceReference Path="~/Sample.asmx" />
    </Services>

    Here, I am adding a reference to a web service named "Sample.asmx" which is present in the same project. What happens under the hood is that, for each service that you reference with the ServiceReference tag, a proxy is created and downloaded on the client side and this proxy is used to validate the calls from JavaScript to the web service.

  4. Once this is done, we can call a web method from JavaScript as follows :
  5. WebServiceClass.WebMethod(parameters, Success Callback function, Failure callback function, context);

    parameters : Zero or more values to be passed to the web method, separated by commas

    Success callback function : The name of the JavaScript function that will be called when the web method completes execution successfully

    Failure callback function : The name of the JavaScript function that will be called in case the web method encounters an error during execution

    context : An optional string value. It is useful when the same JavaScript function serves as the callback for multiple web methods, since it lets the function determine which web method call invoked it.

  6. The final step is to code the success and failure callbacks, syntax for which is as follows :

function SuccessCallback(response, context, method)

function FailureCallback(error, context, method)


response : Contains the value returned by the web method. It can be as simple as an integer or a string, or as complex as the JSON representation of a list or even a custom object. Its values can be accessed using the normal object.property syntax.

error : Object representing the error that occurred in the web method. The most useful method of this object is get_message() which returns a description of the error that occurred

context : Optional value, which will have the same value as provided as the last parameter when calling the web method

method : Name of the web method from which the current JavaScript function has been invoked
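As a concrete illustration, the two callbacks might be fleshed out as below. This is only a sketch : the method name GetData, the "count" context value and the response shapes are assumptions for the example, and a real page would typically update the DOM instead of returning strings (returning strings just keeps the sketch testable) :

```javascript
//Success callback : invoked with the web method's return value.
//The context string passed at call time is used to decide how to format it.
function SuccessCallback(response, context, method) {
    if (context === "count")
        return method + " returned " + response + " rows";
    return method + " returned: " + JSON.stringify(response);
}

//Failure callback : ASP.NET AJAX error objects expose get_message();
//fall back to plain string conversion for anything else.
function FailureCallback(error, context, method) {
    var message = (error && typeof error.get_message === "function")
        ? error.get_message()
        : String(error);
    return method + " failed: " + message;
}
```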

Having the ability to call web services asynchronously from the client provides tremendous flexibility as well as performance advantages to the programmers and allows us to create really powerful and performant sites with ease.

Ignoring certain characters from user input in HTML Textbox

One common requirement when creating user input forms is to reject certain characters in a text box. For example, we would not want to allow letters in input fields for dates and age. Similarly, we would not want to allow numbers in an input field for a name.

As it turns out, this is really easy to do with some simple JavaScript. We can accomplish this with the following steps :
  1. Identify the characters that need to be ignored (Black listing approach)
  2. Add an event handler for the key down event on the text box
  3. In the event handler, check whether the input character is one of the characters that needs to be ignored. For these characters, just return false from the event handler. This will prevent that particular character from reflecting in the text box
  4. Optionally, one can show an error message when one of the unwanted characters is input. This prevents the user from getting confused when his keystrokes don't cause any changes in the contents of the text box

Here is the code for declaring text field in HTML :

<input id="startDate" type="text" />

We can add the event handler in the HTML tag itself or in a Javascript function which will be called on the "onload" event of the body as follows :

document.getElementById('startDate').onkeydown = CheckNumber;

Here, we are using the "Key Down" event of the text box to check for the character input by the user. Finally, the event handler function code is as follows :

function CheckNumber(e) {
    try {
        if (!e)
            e = window.event;
        var keynum;
        try {
            //IE : Get the code of the key the user pressed
            keynum = e.keyCode;
        }
        catch (err) {
            //Netscape/Firefox/Opera : Get the code of the key the user pressed
            keynum = e.which;
        }

        /*Prevent any key not in the range 0-9 and not '/', Backspace, Delete,
          the left and right arrow keys, Tab or Shift+Tab. Key codes : '/' is 191,
          Backspace is 8, Delete is 46, Left is 37, Right is 39, Tab is 9, Shift is 16*/
        if (keynum != 191 && (keynum < 48 || keynum > 57) && keynum != 8 && keynum != 46
                && keynum != 37 && keynum != 39 && keynum != 9 && keynum != 16) {
            //Show error to the user to inform him that the input was invalid
            document.getElementById("dateErrorMessage").style.display = "inline";
            //Return false to prevent the character from being added to the textbox
            return false;
        }
        else {
            //Remove the error message if it's visible
            document.getElementById("dateErrorMessage").style.display = "none";
        }
    }
    catch (ex) {
        alert("Check number : " + ex.message);
    }
}

As the comments indicate, we first retrieve the key code for the key pressed by the user (as expected, there are two different ways of getting the value depending on the browser). Then we check it against the characters we will allow. (I am using white listing here; one can use black listing too, as mentioned earlier. The right approach depends on the number of conditions that need to be specified, but white listing is generally preferred : it causes fewer problems if you miss some conditions, and such a mistake is easier to catch during testing.) If the key doesn't fall within our white list, we show an error message on a label and return false to prevent the character from appearing in the text box. Otherwise, we do nothing, so the character appears normally in the text box.
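The white-list test itself can be pulled out into a pure function, which keeps the event handler short and lets the key codes be unit-tested without a browser (the function name is my addition) :

```javascript
//True for the keys the date field accepts : digits 0-9 (codes 48-57),
//'/' (191), Backspace (8), Delete (46), Left (37), Right (39), Tab (9), Shift (16).
function isAllowedKey(keynum) {
    if (keynum >= 48 && keynum <= 57)
        return true;
    return [191, 8, 46, 37, 39, 9, 16].indexOf(keynum) !== -1;
}
```

The event handler would then reduce to showing or hiding the error message based on isAllowedKey(keynum).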

So there you have it, a textbox that accepts restricted input. Customise it for the type of input you want to allow/disallow.

Sunday, November 9, 2008

Presentation Tips

I know there are umpteen tips out there about how to give presentations, how to prepare for presentations and so on. I don't intend to repeat the same stuff here. Instead, there is 1 point that is very important, at least in my view, and which I haven't seen in any of these lists.

My suggestion is to always try to give the presentation from your own laptop/desktop. This comes into the picture especially when you have a long presentation which will be delivered by multiple speakers in parts. The reason presenting on another machine can be problematic is that you don't know the locations of most files and folders, how some things work (or don't work) on that machine, the quirks it throws up on certain user actions, and so on. It wouldn't be a good experience to look flummoxed in front of your audience when you can't find a particular window or folder :-)

So, in my opinion, using your own machine is one more stepping stone towards a successful, smooth presentation.

Tuesday, October 28, 2008

Importance of sending Status Reports

In this post I would like to highlight the importance of having a habit of sending regular status reports to your manager and also, if need be, to your immediate team.

I generally send out the report on Friday evenings so that the manager has an idea of what happened during the week, before that week ends. One can also send the report on Monday mornings but then, that would, theoretically at least, mean you are communicating to your manager a tad late. Status reports that I send out contain 3 main sections :
  1. Activities completed this week : What were the things you worked on and managed to complete this week (do not include items that you are still working on and haven't completed, those will go into the next section)
  2. Actions items for next week : What things are you planning to work on and/or complete in the following week
  3. Blocking issues : This is the most important section that you should send out to your manager. Communicate clearly issues that restrict your progress and follow up on them to get them sorted out asap

Given these sections, one can either send out a simple list of activities for each section or have multiple columns under the first 2 sections to be more elaborate. The columns that I include under the first 2 sections are :

  1. Activity : The activity that you completed/are going to complete in the following week
  2. Effort : An indication of the effort needed to complete an activity (for activities under the first section) or an estimate of the effort that will be required (for activities that are part of the second section)
  3. Main challenges/points : Highlight the main technical issues that had to be /will need to be addressed/solved to complete the stated activity

The big question in your mind would be : why should I spend around half an hour every week on a report that my manager might not give more than a cursory glance? Well, here are some of the reasons why that half an hour is time well invested :

  1. When you sit to jot down the activities, it helps you to get an idea of how much work has been done and allows you to get an understanding of how efficient you are being at what you are doing. You will be able to catch early signs when there is a need for you to pull up your socks and work harder.
  2. Jotting down the action items for the coming week streamlines your work and thinking
  3. Highlighting the blocking items gives you a better chance of getting them resolved sooner so that you can get back on track with your work
  4. If you organise your status reports in a separate folder (like I do using Outlook rules), you can just take a look at the status reports to get an idea of what all work you did during any given time period. This will prove priceless when you sit down at the year end to fill up your performance review.
  5. Finally, and probably most importantly, it's your chance to show off your efficiency and abilities to your manager. Thanks to this post for highlighting this point

I agree that some of these advantages are already inherent in software development methodologies like Scrum, but there are plenty of other reasons, as can be seen from the list above, for you to use regular status reports. Feel free to add more advantages or some of the best practices you follow when it comes to status reports. Also, share your views if and why you think that sending status reports is a waste of time.

Sunday, October 5, 2008

Dataset v/s DataReader

In this post, I will be discussing the behavioral characteristics and the performance differences of the DataSet and the DataReader as well as indicate the suitability of use of these objects in various scenarios.

The DataSet is a "disconnected" data store. What this means is that the DataSet object need not maintain a connection with the database at all times; a connection is needed only while fetching and updating data. The DataSet can be populated with data using something like this :

SqlConnection conn = new SqlConnection();
conn.ConnectionString = "Data Source=.;Database=TempDB;Integrated Security=true;";
SqlDataAdapter da = new SqlDataAdapter("select * from Temp", conn);
DataSet ds = new DataSet();
//Fill opens the connection, fetches the data and closes the connection
da.Fill(ds);
//Process data in the DataSet ds

As can be seen from the above snippet, once the data has been read into the DataSet, the connection can be closed immediately. The data can still be accessed from within the DataSet.

DataReader, on the other hand, is a "connected" data store which means that there needs to be a connection maintained to the database in order to be able to access the values in the DataReader. The DataReader can be populated with data using something like this ,

SqlConnection conn = new SqlConnection();
conn.ConnectionString = "Data Source=.;Database=TempDB;Integrated Security=true;";
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "select * from Temp";
cmd.CommandType = CommandType.Text;
conn.Open();
SqlDataReader rdr = cmd.ExecuteReader();
while (rdr.Read())
{
    //Process the data read from the DB
}
rdr.Close();
conn.Close();

Notice here that the connection is closed only after iterating through the entire record set returned by the query.

As can be seen from above, a DataSet, although providing a lot of flexibility in terms of usage scenarios, is memory intensive. Since it is a disconnected data store, it holds all the data read from the database in memory, which can really slow down the application if the DataSet is populated with millions of rows. On the other hand, the DataSet provides "random" access : any record stored in it can be accessed directly and in any order, and one can move back and forth through the records. Another important flexibility is that data within a DataSet can be modified and the updates propagated back to the database through the data adapter. Thus, a DataSet is suitable for scenarios where data updates or random access are needed, but it should be used cautiously when huge amounts of data are being fetched from the database.

As for the DataReader, it is almost an opposite of the DataSet. Since it maintains an open connection to the database at all times, it needn't store data in local memory. Instead records are read in chunks on a need basis. Thus, it proves to be pretty efficient in terms of memory usage. However, this efficiency comes at a cost : records within a DataReader can be traversed only forward and that too, only once. If a record needs to be read a second time, the query needs to be executed again as there is no provision to move backwards through the DataReader. Also the data read through the DataReader is read only. Thus, a DataReader lends itself to scenarios where huge amounts of data are being read without having the need to update them or have random access over that data.

Accessing return value of SPs

When working with databases, it is recommended that all database queries be moved into Stored Procedures (hereafter referred to as SPs). This makes perfect sense because having all the queries in one place makes it easier to debug when the database integration is not working, and less cumbersome to make changes (since it is known where those changes need to be made, and they need not be duplicated across multiple code files). Given this best practice, it is often a requirement to capture the return value of an SP in order to determine whether its execution succeeded or failed.

ADO.NET has the SqlCommand and SqlConnection objects to allow the user to invoke the SP and have access to the results returned by the SP. Specifically, there are 3 methods that can be used to execute a query or an SP on the database :
  1. SqlCommand.ExecuteNonQuery() : Used for queries which don't return a result set, i.e. insert, update and delete queries
  2. SqlCommand.ExecuteScalar() : Used for queries which are guaranteed to return a single value
  3. SqlCommand.ExecuteReader() : Used for queries that return multi column and/or multi row result sets

There is 1 important difference in the use of these 3 methods when it comes to accessing return values. It turns out that with ExecuteScalar() and ExecuteNonQuery(), we can access the return value immediately after the method call, whereas for ExecuteReader() this is not the case. If we try to access the value of the parameter object created for the return value, it will be null. The return value is set only after we iterate through the entire result set returned by the reader object. The reasoning behind this behaviour could be as follows : if the result set was returned successfully, the SP succeeded, so there's no point in checking the return value. It only makes sense to check the return value when no results were returned; the return value then lets us determine whether there was actually no data in the database for the given query or a bug caused incorrect results to be returned.

Do keep this slight variation in the behaviour of the ExecuteReader() the next time you use it.

Sunday, September 21, 2008

Javascript : Creating Textbox watermarks

One of the common requirements when working with web page forms is to have some default text show within a text box and make it disappear when the user enters something in the text box, in other words, create a text box watermark.

Now if you are working with ASP.NET then an easy way to do this is to use the AJAX Control Toolkit's 'TextboxWatermark' control and just link the AJAX control to the text box in which we want the watermark to appear. The AJAX control provides a number of options to control the watermark. However, if you are working with pure HTML pages then, you are stuck with Javascript and need to figure out a way to control the watermarking. I recently wrote some code to do the same and would like to share it in this post.

Assuming you have a html page added to your project, the following steps would enable the watermarking of the text box :

  1. The first step obviously is to define the basic HTML tag which will add a normal text box to your page. Here's a sample code for this :

    <input type="text" id="txtUsername" name="txtUsername" />

  2. The next step is to define a variable which will store the string that will act as the watermark (Thanks Pratik for sharing this best practice with me). The advantage of creating a variable is that it makes it easy to perform comparisons (which will be mentioned in a later step) and also requires you to change the watermark text in only one place.

    var gcUsernameTbDefaultText='Please enter a username';

  3. The trick to have the watermark appear and disappear is to capture the 'onblur' and 'onfocus' events and manipulate the contents of the text box. The following code snippet shows functions being called for these 2 events.

    <input type="text" id="txtUsername" name="txtUsername" onfocus="clearTextbox('txtUsername',gcUsernameTbDefaultText);" onblur="showDefaultText('txtUsername',gcUsernameTbDefaultText);" />

    As you can see, the functions called when the events occur are passed the id of the text box on which the watermark is needed and also the watermark text, making these functions generic and usable with any text box.

  4. The last and final step is to implement the 2 functions that do the actual manipulation. So here they are :

    • function clearTextbox(objId, text) {
          try {
              var obj = document.getElementById(objId);
              //Clear the textbox only if it contains the original default value
              if (obj != null && obj.value.toString().toLowerCase() == text.toLowerCase())
                  obj.value = "";
          }
          catch (e) {
          }
      }

    • function showDefaultText(objId, text) {
          try {
              var obj = document.getElementById(objId);
              //Restore the watermark only if the user left the textbox empty
              if (obj != null && obj.value == "")
                  obj.value = text;
          }
          catch (e) {
          }
      }

So there you have it, simple text box watermarking. There are 2 improvements one can do on this implementation :

  1. Dynamically apply styles in order to show the watermark text in gray, with some sort of transparency and have the normal text with normal appearance
  2. This approach won't work with 'password' text boxes since even the watermark will appear as asterisks. This post talks about implementing watermarks for password text boxes
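As an aside, the comparison logic inside the two functions only needs an object with a value property, so it can be extracted and unit-tested away from the DOM. A sketch (the helper names are mine) :

```javascript
//Clear the watermark when the stored value is still the default text.
function clearIfWatermark(obj, text) {
    if (obj != null && String(obj.value).toLowerCase() == text.toLowerCase())
        obj.value = "";
    return obj;
}

//Restore the watermark when the user left the field empty.
function restoreIfEmpty(obj, text) {
    if (obj != null && obj.value == "")
        obj.value = text;
    return obj;
}
```

The DOM-facing functions then just look up the element by id and delegate to these helpers.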

Review : Dropbox

I recently came across a new file sharing service called Dropbox. The service is currently in beta and comes with an optional client side software which makes working with the service really simple.

The service comes with a 2GB space limit for free usage, with plans for providing more storage at different pricing options. The site is really easy to use and pretty fast (they have made good use of AJAX to make as few page refreshes as possible). The single, most prominent, differentiating factor between dropbox and the other tons of file sharing services out there is the desktop app that comes with the service. A folder is created as part of the local file system and works just like any other local folder : you can seamlessly drag and drop files and folders into the dropbox folder and the changes will be synced to their servers and eventually to other users connected to the shared folders. If you have the system tray icon running, you will also get notifications anytime your dropbox folder contents are changed by other users.

This desktop app makes it really easy to work with the service and cuts down on the learning curve. This folder contains 2 sub folders by default : Public and Photos. The public folder is supposed to contain the files and folders that you want anyone to be able to access. For each file in this folder, dropbox generates a link which can be sent out to anyone (Even non-dropbox users) and they would be able to access the files. The Photos folder represents a photo gallery. Each sub folder in here will become an album on the web interface and you can drop any photos within these folders. The service has been built on top of the incredibly useful Amazon's Simple Storage Service (S3).

Here are the pros and cons of the service in my view :

Pros :
  1. The web interface is really fast and easy to use, complemented nicely by the desktop app, making the service extremely user friendly
  2. The site uses SSL for transferring any content over the Internet, making the process safe and secure
  3. Dropbox supports all 3 major OSes : Windows, Mac and Linux
  4. It uses the "Delta sync" technique to sync the files and folders. What this means is that the entire file isn't uploaded every time it has been changed. Only the parts that have been changed are uploaded thereby saving precious bandwidth and reducing syncing times

Cons :

  1. The storage limit of 2GB is pretty limiting, especially if you are planning to store images and audio/video content
  2. It doesn't handle conflicting changes really well currently, taking only the first change and requiring users to manually resolve conflicts

Overall, I found the service really useful and simple to use. Instead of trying to implement too many features, the Dropbox team has given importance to simplicity, coming up with a service that does minimal things but does them really well. I would definitely recommend giving it a try; I am going to keep it installed and will use it whenever I need to share stuff with my friends. It is much easier than going to a website, uploading files there and sending the resulting links through email.

Let me know your experience with the service and what you liked/disliked about it

Saturday, August 30, 2008

Review : Viewzi

In today's day and age, search is becoming more and more critical to Internet users. As the amount of data on the Internet is growing exponentially, it's becoming very difficult to find exactly what you need since there are so many variations of everything on the net. Add to this the potential revenue to be earned via ads and you have one very lucrative field (Online ad revenues are estimated to be around $80 billion by 2010).

The success of Google and the above mentioned factors have led to a proliferation of search engines, each one promoting itself as the next potential Google killer. However, it is not easy to create a scalable search engine that is also good at returning relevant results : one needs a very good indexing algorithm, and having enough infrastructure alone is not sufficient. I have seen and tried a number of search engines, including the now infamous Cuil, which went down on its very first day despite generating a lot of hype before its launch. Another one, which is pretty useful but whose usefulness is currently limited to Wikipedia, is the semantic search engine Powerset. Semantic means that the search engine is able to "understand" the contents of web pages and hence allows users to query in natural language, rather than having to frame the query in a way the search engine can understand. Powerset has shown potential, and Microsoft's acquisition of the company is a testament to this.

However, the one search engine I've liked the most (amongst the latest ones of course :) ) is Viewzi. It has an extremely appealing visual interface which certainly impressed me the first time I tried it. Instead of the normal hyperlink (famously called the "ten blue links") design, it has a number of different "views" (actually it has the "ten blue links" design as one of the views, for the traditional people I suppose). Each of the views has a number of sources; for example, for the "Simple text view" the sources used are Yahoo and Google. The site has about 15 different views (talk about spoiling the customer for choice). Some of my favourite views are :
  1. Video view : Sources are YouTube (has to be), Blinkx and Veoh. The results appear as 3 horizontal bars across the page and are very intuitive to use
  2. MP3 view : This view uses SeeqPod and MP3 Realm as its sources. The best thing about this view is that you can play songs directly from the search page. Here is an example search for Shaan
  3. Timeline view : This is a very informative and at the same time attractive way of looking at results. The results are laid out from left to right along a timeline, and scrolling at the ends will move the timeline accordingly. One can also look at a glimpse of an article by clicking on one of the results. Just check out this view for Sachin Tendulkar, it's amazing

Here are the pros and cons of this tool, in my opinion :

Pros :

  1. The visual attractiveness has to be the USP for this site. I have not seen any other search engine come anywhere close to its ease of use and interactivity
  2. The results are fairly good for common queries
  3. There are no ads (at least not yet)

Cons :

  1. To be a real threat to Google and be a major contender in the search engine industry, the site needs to add more sources for getting better results
  2. One major limitation of the service currently (and something which they will hopefully fix soon) is the absence of a toolbar. All the other major search engines have one, even new ones like Scour. Toolbars make it really easy for the user to perform searches without disrupting his current page. I think it would be even more useful for Viewzi given its multiple view options.

I think that there has to be something disruptive like Viewzi or the vertical search engines which can break the near monopoly of Google in the search business and make search a level playing field again. The only concern for me is that Google can just as easily buy such engines for a huge amount and increase its monopoly. Let's hope Microsoft or Yahoo gets to Viewzi first, and maybe that will narrow the gap in the search market share.

Review : Photosynth

Microsoft recently released its much awaited technology called Photosynth. This is a very cool technology developed by Microsoft Research in association with the University of Washington. The name combines "photo", meaning picture, with "synth", short for "synthesis", i.e. the combination of parts into a whole. Photosynth allows users to create a 3D model from a collection of "related" 2D photos. The users can then zoom in and out of the 3D picture as well as move in all directions.

The tool works by identifying a set of common areas between images and then stitching those areas together to form a 3D model of the scene. All one needs is a decent camera and some ingenuity to shoot a series of photos that resemble each other. In addition, one needs a decent Internet connection to upload all those pics to the site. Microsoft has used this technology to create another similar application which also builds a 3D model of a structure but has additional powerful features, such as balancing out the lighting differences between photos to create a smoother model. Photosynth has already been featured on CSI: Miami, where it was used by detectives to solve a case, and talks are on between Microsoft and NASA to use this technology at the International Space Station for finding defects in the ISS body. The service has proven really popular, so popular in fact that the servers were overwhelmed on the first day itself and the service went down for a few hours.
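To give a feel for the "find the common area, then stitch" idea, here is a toy sketch in Python. This is entirely my own illustration, nowhere near Photosynth's real feature-matching and 3D reconstruction pipeline: it just slides two tiny grayscale grids over each other, picks the horizontal overlap whose pixels agree best, and merges them into one wider image.

```python
def best_overlap(left, right, min_overlap=2):
    """Slide `right` over the right edge of `left` and return the overlap
    width whose columns match best (lowest sum of squared differences)."""
    h = len(left)
    w_left, w_right = len(left[0]), len(right[0])
    best_w, best_score = None, float("inf")
    for ov in range(min_overlap, min(w_left, w_right) + 1):
        score = 0
        for r in range(h):
            for c in range(ov):
                d = left[r][w_left - ov + c] - right[r][c]
                score += d * d
        if score < best_score:
            best_score, best_w = score, ov
    return best_w

def stitch(left, right):
    """Merge two images along their best-matching overlap."""
    ov = best_overlap(left, right)
    return [row_l + row_r[ov:] for row_l, row_r in zip(left, right)]

# Two 2x4 "photos" of the same scene, shot with a 2-pixel overlap:
a = [[10, 20, 30, 40],
     [11, 21, 31, 41]]
b = [[30, 40, 50, 60],
     [31, 41, 51, 61]]

panorama = stitch(a, b)
print(panorama)  # each row now spans the full scene
```

The real system of course matches distinctive feature points across many photos and recovers camera positions in 3D, but the principle is the same: regions that look alike in two photos anchor how the photos fit together.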

Here is how to use this service :
  1. Navigate to Photosynth
  2. Log in with your Live ID and create a profile, which basically requires you to specify a name and description for the synth (yes, that is what they call the 3D model :) ) you are trying to create
  3. Once logged in, browse your desktop and upload the photos from which you want the synth to be created and wait till the images are uploaded and processed.
  4. Once done, the synth will be displayed. You can check mine although evidently, it's not too synthy (only 20%)

Here are the pros and cons of the service, in my opinion :

Pros :

  1. It is a really cool tool to play with and use to share photos with your friends and family
  2. It has a very small client footprint (only a browser plugin needs to be installed). Much of the processing happens in the cloud.
  3. It's decent in terms of speed; much depends on your Internet connection, but the processing itself seems pretty fast
  4. You can tag the synths which makes searching easier

Cons :

  1. The fact that it has a very small client footprint is also a disadvantage, since one needs to be connected to the Internet to be able to use it
  2. There is no option to save your creations onto the desktop ( Hopefully this will change once the product matures )
  3. All creations are public, which means anyone can see anyone's synths
  4. It works only on PCs right now (Support for the Mac is expected soon)

However, the product is still in its infancy and we can expect Microsoft to add more features (most importantly : saving synths and access controls) in the near future. All in all, I found it a lot of fun to work with and a nice way to share your experiences with your family and friends and help them relive those moments in 3D.