Google says to do it using robots.txt:
"Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines." (Google Webmaster Help)
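As a rough sketch of what that looks like in practice, assuming your internal search lives under /search/ or a ?s= query parameter (swap in whatever URL pattern your own site actually uses):

    User-agent: *
    # adjust these paths to match your own site's search URLs
    Disallow: /search/
    Disallow: /*?s=

Worth remembering that robots.txt only blocks crawling, not indexing: if other sites link to a blocked search URL, Google can still show that URL in results, just without a snippet.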
Perhaps it comes down to how much value these pages add for visitors who land on them from search results. I see many websites (especially big sites) that allow their internal search results to be spidered by Googlebot. Of course, if you follow Google's advice and block them in robots.txt, and people are linking to those search results pages, you might be losing out on incoming link equity. Andy Beard wrote about something similar a while back: SEO Linking Gotchas Even the Pros Make.
I have used various methods to manage internal search results pages over the last few years, but it annoys me to see bigger brands ignore this Google guideline (where it benefits them). Of course, they could well be hurting themselves in other ways by ignoring this directive and letting Google crawl and return these internal SERPs.
I am no expert in this area. Do you prevent Google from indexing the internal search results of your website, and if so, how do you do it?
My Twitter buddy Edward Lewis pointed me to this site for more information on Robots Meta Tags if you want to get deep :)
I can tell you I do NOT prevent Google from crawling my search pages, but I don't encourage Google to crawl them either, and I do NOINDEX search results pages to minimise duplicate content.
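If you want to take that route, a minimal sketch (assuming your search results come from a template you can edit): leave the pages crawlable, i.e. no robots.txt block, because Googlebot has to fetch a page to see the tag, and add a robots meta tag to the head of the search results template:

    <!-- keep this page out of the index, but let crawlers still follow its links -->
    <meta name="robots" content="noindex, follow">

If you can't touch the HTML, sending an X-Robots-Tag: noindex HTTP header on those URLs does the same job.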