Thursday, July 29, 2010

Cook with what you just got


We are glad to introduce a new feature that identifies all the matching recipes based on the ingredients you have on hand right now. This is the first of its kind for Indian recipes.

We call this feature MyKitchen.


It's a neat, simple feature: you start by entering a comma-separated list of ingredients (for example: carrots, onion, peas) and hit submit. We parse the ingredients and match them against our recipes.
Give it a try and let us know your comments. http://fullmeals.net/mykitchen.aspx
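
For those curious how this works under the hood, here is a rough sketch of the idea in C#. This is not our production code; the Recipe shape and the exact matching rule are just illustrative assumptions:

// Illustrative sketch only; the real MyKitchen matching logic may differ.
using System.Collections.Generic;
using System.Linq;

class Recipe
{
    public string Name { get; set; }
    public List<string> Ingredients { get; set; }   // hypothetical shape
}

static class MyKitchenSketch
{
    // Parse "carrots, onion, peas" into a normalized set of ingredient names,
    // then return recipes whose ingredients are all covered by that set.
    public static IEnumerable<Recipe> Match(string input, IEnumerable<Recipe> recipes)
    {
        var have = new HashSet<string>(
            input.Split(',')
                 .Select(i => i.Trim().ToLowerInvariant())
                 .Where(i => i.Length > 0));

        return recipes.Where(r =>
            r.Ingredients.All(i => have.Contains(i.ToLowerInvariant())));
    }
}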

Monday, July 12, 2010

Enable gzip compression for IIS 7 (WCF requests)

GZIP is the compression standard that lets you compress the HTTP traffic between the client and the server, so you save bandwidth and get snappier pages. We had already enabled it for static content such as CSS, JS, and static HTML pages.

But we ran into a tricky situation when we wanted to do the same for WCF requests, which are part of the upcoming API for the fullmeals site.

A Google search turns up a lot of different ways to enable GZIP on IIS 7 (which is what our API runs on):

http://www.google.com/search?q=wcf+gzip+iis7&sourceid=ie7&rls=com.microsoft:en-us:IE-SearchBox&ie=&oe=

But a lot of them require changes to the WCF code. We tried most of them and were finally able to get it working with the easy steps below. No code changes necessary.

1. Make sure the compression module is installed in IIS 7 for your site, and enable both static and dynamic compression.
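
If you prefer the command line to the IIS Manager UI, the same switches can be flipped with appcmd (a sketch; run from an elevated prompt):

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/urlCompression /doStaticCompression:"True" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/urlCompression /doDynamicCompression:"True" /commit:apphost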

2. Edit the C:\Windows\System32\Inetsrv\Config\applicationHost.config file on the web server and modify the compression settings as shown below.
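
As a sketch, assuming the default <httpCompression> section (yours may already contain some of these entries), the change is to add the WCF response content types to the <dynamicTypes> list so that XML and JSON responses get compressed:

<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
  <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
  <dynamicTypes>
    <!-- added so WCF REST responses are compressed -->
    <add mimeType="application/xml; charset=utf-8" enabled="true" />
    <add mimeType="application/json; charset=utf-8" enabled="true" />
    <!-- IIS defaults -->
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="application/x-javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </dynamicTypes>
</httpCompression>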

3. Restart IIS and that's it.
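
From an elevated command prompt this is simply:

iisreset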

We were able to get this working for both XML and JSON REST calls.
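
A quick way to verify is to request one of the REST endpoints with an Accept-Encoding: gzip header and check for Content-Encoding: gzip in the response headers, for example with curl (the URL below is only a placeholder):

curl -s -D - -o NUL -H "Accept-Encoding: gzip" http://localhost/YourService.svc/items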

Let us know your comments.

fullmeals dev team

Sunday, July 11, 2010

Solve "URL restricted by robots.txt" errors

Our site is brand new, and we have been carefully watching the Googlebot crawl statistics from day 1. Lately we have been noticing a lot of crawl errors that say:

"Restricted by robots.txt. Detail: URL restricted by robots.txt"


It's strange, because all of the pages listed in the errors are pages we deliberately blocked in robots.txt. Here is a sample from our robots.txt file:

Disallow: /signup.aspx
Disallow: /SignUp.aspx

Disallow: /managerecipe.aspx
Disallow: /managerestaurant.aspx
Disallow: /managebrand.aspx
Disallow: /manage.aspx

But Google still discovered these URLs through links on other content pages and ended up reporting them as errors.
After reviewing the content pages, we identified that every content page has a link to signup.aspx, and that is why Google kept running into them.

Luckily, Google provides a way to keep certain links from being followed: add rel="nofollow" to the HTML hyperlink, in the format below.
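
For example (the anchor text here is only illustrative):

<a href="/signup.aspx" rel="nofollow">Sign Up</a>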

We added it to every page that links to the signup page, and now Google seems happy; we are no longer seeing these errors in the crawl report.


Let us know your comments as well.