Nima Dilmaghani’s Technology Blog

How best to announce a new technology or how Google does things differently

Usually, technology companies make significant first-version releases of their technologies, or major upgrades, at large technology conferences that they closely control. Apple announced and publicly demoed the iPhone for the first time at Macworld in 2007. Microsoft announced and released early access to Windows Azure at the Professional Developers Conference in the fall of 2008. Similarly, Microsoft will make several technology releases at the Mix conference this week. This is usually done in a setting where all eyes are on the company. Everything is very carefully prepared, and the demos, the interviews, the sound bites, and the method of distributing early access are carefully crafted and rehearsed. The press is in abundance at these conferences, hungry for the headline that is sure to come. This practice stems from how information was distributed in the past. Both the regular media and the industry press would be at the event in force, ready for the news story perfectly crafted by the technology company. The big launch of Technology XYZ, done properly, would guarantee a good story on that night's TV news program, big coverage in the next day's paper, and in the weekly and monthly magazines that followed. The marketing departments at technology companies are usually headed by MBA types with business backgrounds. Their training and experience tell them that the more positive media coverage they get, the more successful they have been.

However, the question to ask is whether this is the best way to get the news out to developers, the people who will actually build with these technologies going forward. Google clearly thinks not. As in many other areas, Google has been doing things differently. Google did not announce App Engine at Google I/O last year. Instead, they chose to announce it about a month before the big conference, at a small invite-only event called Google Campfire One. By the time developers arrived at Google I/O a month later, they had already played with the product and could ask more advanced questions. While this approach does not provide the media splash that the traditional approach delivers, it is inherently better for the developers and for the platform. It gives the community several weeks to play with the bits, understand them, and come to the conference with informed questions and feedback. The presentations can then range from beginner-level to advanced topics, and the end product of the conference is developers who can better use the platform and build fantastic products on top of it. In the end, it is not how much media splash a technology gets that defines its success or failure. Sadly, it is not even the quality of the technology itself. It is the quality and usefulness of what developers build on it. Unfortunately, in traditional technology companies, the metrics of marketing success have been the number of articles published and, more recently, the number of related blog posts and tweets, not how quickly the community of developers can install, learn, and build on the new platform.

I strongly suspect that Google will do the same this year. Watch for Campfire Two in the next few weeks, where Google will make its announcements and releases. And learn, learn, learn before coming to Google I/O, where you can take your knowledge to the next level. Why would Google change a working formula that no other technology company has even grokked?

There is another reason other companies have not grokked this approach. Traditionally, technology companies like Microsoft sell software, and traditionally their customers have been enterprises that would wait and wait and wait before adopting the software and allowing it to be installed and used in their data centers. The period from announcement to deployment has been long enough to make the traditional marketing method superior. We are surely moving away from this world. In the world of cloud computing and services, there is no deployment. We live in the world of instant gratification. And in this new world, Google's approach is the winner's approach. There is one other difference that needs mention. App Engine is not particularly targeted at the enterprise. The business decision maker who chooses the Google solution is much closer to the developer than the person who decides between SQL Server and Oracle. The enterprise business decision maker is more likely to be at the big conference, hear about the release at the keynote, and then create an action item for his engineers to play with and evaluate the new technology. For that audience, the traditional method may still have advantages.

Having said the above, every platform company should still look closely at what Google is doing and take it one step further. Put as much of your marketing dollar as possible into training your customers and enabling outside developers to build their knowledge and expertise of the new platform quickly so they can build great products on it. Then, and only then, will your platform be truly successful, go viral, and become the envy of the industry.


Silicon Valley Code Camp is this weekend

While I am sitting far away from Silicon Valley, I will be watching as the second Silicon Valley Code Camp happens this weekend. I want to tell all the developers, coders, architects, hackers, or whatever techie name they want to call themselves, who live in or near the Valley, how lucky they are to have such a great event there. Some of my favorite techies will be speaking at this event. People like Douglas Crockford, Juval Lowy, and Matt Mullenweg will be taking time to share their knowledge and experience with the rest of us, and thanks to the hard work of folks like Peter Kellner, who have spent countless hours organizing this event, it will all be free. Believe me, people in other parts of the country or the world do not have the same luxury of driving a few minutes from home to listen, learn, and share with such a powerful group of software engineers and pioneers involved in such a diverse array of technologies.

Fortunately, the word has gotten around, and over 700 people have registered. Unfortunately, many of those who register will not show up, mainly because registration is free and the barrier to entry is nothing. So at the last minute, they decide to do something else, or feel lazy, or … I don't really know why. All I know is that this is a great opportunity. People pay hundreds of dollars at conferences to see the same speakers give the same talks, and folks in the Valley have a wonderful chance to take advantage of it for free this weekend. So don't let this opportunity go by. If you have not registered, register now. If you have registered, set your alarm clock for Saturday morning and go down there. You are blessed with the opportunity to live in the Valley. Take full advantage of it.

I wish I was there.

The tools folks said they used at the conference

At the conference, someone put a sheet on the wall and folks filled it out.

Here is a flickr link to the photo of the first page, and here is what was on the sheet:

Update:  AltNetPedia has this list in much better order now.

What tools do you use?
Visual Studio!
Big Whiteboard Wall!
Rhino Mocks
MS Test
Team Build
Mac Book
Structure Map
Textmate for C# (Really!)
Active Record Migrations
NAnt
My Generation
Visual SVN
Socket Wrench

Index Cards
Baseball Bat
Media Wiki
Synergy (network KVM)
Tortoise SVN
SQL Server
Acrobat Connect
SQL Diff
CI Factory
SQL Compare
XML Doc Viewer
Lots of e
Cross Loop

Microsoft announces its new MVC architecture for Web Apps

Today at the conference, Scott Guthrie demoed the new MVC architecture that Microsoft will be releasing for web apps in spring 2008. The first CTP should be available in two weeks. This architecture is very similar in many ways to the Rails architecture but takes full advantage of Microsoft .NET 3.5's features and the strong typing in .NET. The crowd of alpha geeks who were incredibly critical of Microsoft the night before all gathered in one room and listened intently. Many questions were asked: Does this framework work with such and such? Can I do so and so? Scott's answer was yes to all of these questions. The crowd was enchanted by Guthrie. No one had anything negative to say. There were a few syntactic and minor suggestions, and some mental wrestling from some of the geeks, but Scott's technical answers addressed the issues raised. Everyone was incredibly impressed. Scott's presentation and rapid-fire answers to questions demonstrated his detailed understanding of all the testing frameworks, as well as the alternative development frameworks out there, and his team's synthesis of all this knowledge into what appears to be a superior product to what currently exists in the market.
This will be an MVC pattern similar to Rails, with a similar URL mapping convention and an architecture that allows you to plug in your favorite testing tools. Both Scott Hanselman and Philip Wheat taped the talk and will post it shortly. I strongly recommend watching it. This architecture is far superior in separation of concerns, testability, maintainability, and scalability to the existing ASP.NET architecture, which basically mimicked a stateful WinForms environment in a stateless web world to bring existing WinForms developers up to speed with web application development quickly. It will enter a heated battle with Ruby on Rails for the top spot as the best way to develop modern web apps. The Microsoft .NET Framework will have certain advantages, such as WCF, LINQ, and strong typing, while the dynamic nature of Ruby and its faster innovation rate, due to its open-source nature, will have others. It will be interesting to see how this fight pans out.
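To make the URL mapping convention concrete, here is a minimal sketch in Python of the idea behind Rails-style routing, where the path itself names the controller, action, and id. This is only an illustration of the convention; the `route` function and its defaults are invented for the example and are not the actual ASP.NET MVC or Rails routing API:

```python
# Minimal sketch of convention-based MVC URL routing, in the spirit of a
# "/{controller}/{action}/{id}" pattern. Illustration only; not a real
# framework API.

def route(path, default_action="index"):
    """Map a URL path to controller, action, and id by convention."""
    parts = [p for p in path.strip("/").split("/") if p]
    controller = parts[0] if len(parts) > 0 else "home"
    action = parts[1] if len(parts) > 1 else default_action
    id_ = parts[2] if len(parts) > 2 else None
    return {"controller": controller, "action": action, "id": id_}

print(route("/products/edit/5"))  # {'controller': 'products', 'action': 'edit', 'id': '5'}
print(route("/products"))         # action falls back to 'index', id to None
print(route("/"))                 # everything falls back to the defaults
```

The point of the convention is that no per-URL configuration is needed: the route table is the pattern, and any testing tool can call the resulting controller action directly.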

Note that because of a fundamental change in the design, there will be a new (smaller) set of ASP.NET controls that work in this model. This architecture relies more on native HTML controls (which is a good thing; see my CSS blog post for the hoops you need to jump through to make ASP.NET controls work well with CSS). AJAX Control Toolkit controls that talk to the server will also get counterparts that work in this model. There will be no change to the Microsoft Ajax Library or the networking stack of the Microsoft Ajax offering. This stack will also improve the existing ASP.NET architecture by replacing the UpdatePanel, which was designed to wrap existing ASP.NET controls that were not originally designed for Ajax, with a control that can be passed into the app as a JSON object and placed in a placeholder.

To read other perspectives please read the following blogs:

Bob Grommes, Chris Holmes, Howard Dierking, Jeffrey Palermo, Jason Meridth, Joshua Flanagan, Mike Moore, Roy Osherove

Update: Sergio Pereira has written a nice blog post that goes into more detail, with sample code.

topics at

jay flowers: loop diagrams from systems thinking

jeffrey palermo: advanced nhibernate techniques

paul juliean: different styles of pair programming

mvc stuff and plugging the dlr into that ruby view,

can we call it msft does rails

nunit and xunit

ddd domain driven design

scott bellware: behavior driven design

rod how to sell agile to management

making tests pretty

eric anderson: how to introduce bdd to developers who are not actively seeking better ways to do that. how to lower the barrier to writing specs

passion, what to do to build that passion

what is going on with architecture, what you have learned about

unit testing

futurespective on msft. give msft ideas on where to go.



what we lack in .net community that they have in ruby and java community

scott gu new mvc pattern from msft. use nunit to test it.

simon guest guidance or lack of from msdn. how to fix or replace it

westin benford: moving to and from monorail. why would someone spend 6 months on monorail and then move to ror

dynamic languages on the clr

aspect oriented programming

why move from tdd to bdd.

how to move organizational skill up

kevin d? how to move legacy code under test

jacob boris. how to avoid xml hell

howard dierking. runs msdn magazine (laughter, which was not cool). how to systematically move it up to the masses vs. the c++-hates-the-vb-community divide.

moving a .net team to ror. tips tricks

it is harder to build software this way how to make it easier.

intersection of the domain model pattern and rich internet apps built on silverlight

dave ohara. how do we take these ideas and sell them to folks in a way that they see the value.

tom: integration tests involving databases. i am a fan of nhibernate and active record. (use sqlite with the database in xaml - joke)

lightning talk for 5 min. to do quick demos, …

roy: a famous speaker said that tdd will deteriorate your design. can it really do that? when to use it or not. how it compares with bdd.

mike from the uk: you are all a friendly bunch… i am a java manager now. all the alpha geeks have left, as martin has already left. apple is taking over the desktop. is vista the last nail in the coffin? why should i care about msft anymore.

vista ME will be out in just 2 years.

where does a model go, what is the lifespan. when to use mockin, when not.

agile project management.

scott: writing and understanding user stories.

jean paul — becoming a catalyst for change in your organization. how to introduce things like agile into the organization

james kovacs — why are we fascinated with executable xml. it is terribly verbose. painful. can we do better. most msft devs didn't go to college.

ruby for dummies, i mean .net developers.

testing guis.

fostering passion within a company to grow.

are executable requirements possible. are … better. can we do better.

domain specific languages for business and geeks.

language oriented programming is challenging. design aesthetics and environment are challenging with mocking and dependency injection.

what is the persona for .net. mort, einstein, elvis, bellware

sorry for misspelling everyone’s names.

Back to the future with source code viewing with Visual Studio 2008

Posted in .NET, ASP.NET, engineering, Microsoft, programming, software, Technology and Software by nimad on October 4, 2007

Today Microsoft announced that it will be releasing many of the .NET Framework libraries under the Microsoft Reference License (Ms-RL). Scott Guthrie's post details what this means. We have been able to see this source code using Reflector for a number of years, so while getting the code in one big chunk is nicer (and now properly licensed), it is not that big of a deal. A feature that has been lacking since the days of MFC is the ability to step into Microsoft source code in the Visual Studio debugger from your own code. This was a great feature in MFC, and I, among others, had asked Microsoft's product team for it in 2005. Today, my wish has been granted. Starting with VS2008, you can actually step into Microsoft source code from your own code. This will help developers everywhere better understand how Microsoft code works and improve their own code. It also puts Microsoft source code more in the spotlight, and I hope this visibility will push Microsoft developers to write better code.

My new wish is that future pieces of source code released in this manner carry the signature or alias of the developer who wrote them, so that if they did a poor job, the whole world would know. While this wish is very unlikely to come true, for many, many reasons, I thought I'd put it down in writing nonetheless.

The curse and the gift of BarCampBlock

As I am writing this, BarCampBlock is starting in Palo Alto. I will be attending remotely from the East Coast and dearly miss my friends and colleagues who will be there.

BarCamp started two years ago as an ad-hoc gathering of technologists mainly interested in the web. BarCamp is free and open to everyone. It is also an un-conference, very loosely structured. Over the last two years, with the explosion of bubble 2.0 and the rise in popularity, stature, and influence of BarCamp's two main promoters, Tara Hunt and Chris Messina, BarCamp has become a focal point of the Web 2.0 community. Fortunately or unfortunately, human nature, particularly in its Western European practice, requires one to always outdo oneself. So Tara and Chris came up with the brilliant idea of holding BarCamp's second anniversary event as a block party. For a block party to be successful, you need lots of people. For an un-conference to be successful, you need at the very most 250 people (see Tim O'Reilly's comment here). However, the human need to outdo yourself and to celebrate success in the grandest way possible is always tugging at you as you make your decisions. So Tara and Chris went on doing what they do very well: promoting and promoting BarCampBlock. With blog posts from TechCrunch and Robert Scoble, it was obvious that BarCampBlock would be huge. And it is: over 900 people are coming to BarCampBlock! The question that will be answered over the next two days is how effective an un-conference this will be. No doubt it will be lots of fun. But will the connections, relationships, and collaborations that come out of smaller un-conferences happen at BarCampBlock? While I am sitting some 3,000 miles away, I am eager to find out how this new direction for BarCamp will play out.

help find Jim Gray

Posted in Technology and Software by nimad on February 4, 2007

If you’d like to help, go to  Thanks to Amazon and all those who set it up.

Who is Jim Gray?

He is a great guy.  Smart, kind, and generous.  If you have a few minutes, please help with the search. 


SF Vista Launch event is NOT sold out

Posted in Technology and Software by nimad on January 11, 2007

Some of you have been pinging me saying that the SF launch event is sold out. There are 3 tracks with 3 registration links. The Business Decision Maker track is sold out, mainly because it was the top one listed and everyone just clicked it. The developer and IT professional tracks are still open. Please register via:  Once there, if you want to sit in a different track from the one you signed up for, feel free to do so.

Windows Home Server is coming

Posted in Technology and Software by nimad on January 8, 2007

Our homes have become more and more sophisticated hubs of technology over the last several years. We each have at least one computer, the kids have computers, and in some advanced households even the dogs have computers. There are dedicated Windows Media boxes connected to the TV, and … However, there has been no server solution to connect all of these and help us manage the information that sits on these disparate islands. For the last two years, Microsoft has been working on a project codenamed Q, and now that Bill Gates has announced it at CES, I can talk about it. We have chosen a boring retail name, in line with our fine tradition of choosing very boring long names to replace cool internal code names. Now we call it “Windows Home Server.” The product is currently in beta 2.
With Q, sorry, Windows Home Server, losing data on your home computers because of hard drive crashes will be a thing of the past. Home Server automatically backs up your other computers and your shared folders on the Home Server. You can easily restore an individual file or folder, or an entire computer. It employs a revolutionary new storage technology that lets you add both internal and external (USB or FireWire) hard drives of any size to your server and automatically replicate folders across multiple hard drives. It is also easy to remove older hard drives; the Home Server copies their content to other drives. It even comes with predefined shared folders for photos, music, and videos, which can be enabled for media streaming from the Windows Home Server Console. Any digital media receiver attached to the home network, like an Xbox 360, can then access this media. The Home Server also monitors 1) your backups, to make sure they are successful and up to date, 2) the replication of folders across multiple drives, and 3) the data from your computers' Vista Security Center consoles, so you can see the security situation for all your computers from a central point. Using a web browser, you can access the home server remotely to upload or download files, or run applications on your home computers. You access the Home Server by registering a free internet address (<yourname>.HomeServer.Com). Home Server was designed from the beginning to be easy to use and fast. All of this is the tip of the iceberg. Windows Home Server has extensibility endpoints that allow software developers to build add-ons, such as home web cameras, family information management software, home automation, and home security solutions that work with your home server.
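As a rough illustration of the folder replication idea, here is a toy sketch in Python that mirrors a shared folder onto several "drives" (plain directories standing in for disks, so losing any one of them loses no data). The `duplicate_folder` function is invented for the example; this is not Windows Home Server's actual storage technology:

```python
# Toy sketch of folder duplication across multiple "drives" (here, plain
# directories). Illustrates the replication idea only; not the real
# Windows Home Server storage implementation.
import os
import shutil
import tempfile

def duplicate_folder(source, drives):
    """Copy every file in `source` into a mirror folder on each drive."""
    for drive in drives:
        mirror = os.path.join(drive, os.path.basename(source))
        os.makedirs(mirror, exist_ok=True)
        for name in os.listdir(source):
            shutil.copy2(os.path.join(source, name), mirror)

# Demo: a shared "photos" folder mirrored onto two drives.
root = tempfile.mkdtemp()
photos = os.path.join(root, "photos")
os.makedirs(photos)
with open(os.path.join(photos, "vacation.jpg"), "w") as f:
    f.write("fake image data")

drive1 = os.path.join(root, "drive1")
drive2 = os.path.join(root, "drive2")
os.makedirs(drive1)
os.makedirs(drive2)

duplicate_folder(photos, [drive1, drive2])
print(os.path.exists(os.path.join(drive1, "photos", "vacation.jpg")))  # True
print(os.path.exists(os.path.join(drive2, "photos", "vacation.jpg")))  # True
```

The interesting part of the real product is everything this sketch omits: replication happens continuously in the background, and drives of mismatched sizes can be added or removed while the pool rebalances itself.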
The greatness and usefulness of this platform will ultimately be decided by the partners and independent developers that will innovate on it in the coming months and years.