Friday, December 19, 2008

IMPORTANT NOTICE - Read Me First - Before you pay for SEO (Search Engine Optimization) Services!!

If you are about to hire a Search Engine Optimization contractor, spend a few moments reading this - it could save you thousands of dollars.
 
Not long ago I wrote a little article about How to choose a contractor for Search Engine Optimization work. Little did I realize the vital importance of this information for the world. 
 
I have since seen quite a few desperate forum pleas by people who have been ripped off by "SEO" contractors. The reason they got ripped off is that they do not know what Search Engine Optimization is. Don't buy an automobile unless you know what automobiles do, and don't pay for "SEO" unless you know what it is supposed to be, what the firm is going to do for you, and why. If you understand a bit of what it is about, you will not be ripped off as easily, and you will also be able to get the most out of any search engine optimization service.
 
The short version:
 
Search Engine Optimization is a bunch of techniques for making your website more visible in unpaid search engine listings. These techniques involve

On Page Optimization - Keywords and coding and content on a page.

Off Page Optimization - This usually refers to Links from other Websites.

Website SEO design - Link structure in your site.

A longer version: SEO Basics and The SEO Mini-Book. You should at least understand the basics before hiring an SEO contractor.

 
What  Search Engine Optimization is NOT:
 
Paid advertisement in search engine results pages or other Websites - Not SEO and you do not need a firm to do that for you.
 
Black Hat SEO - Practices that can get your site banned from search engines. If someone wants to do anything like that or says "this might get your site banned," don't do it.
 
Web page graphic design - Changing the outward appearance of Web pages in itself may have no effect on search engine placement or number of visitors to your site. It might affect conversion rates, which are peripherally related to Search Engine Optimization.
Choosing a Search Engine Optimization Firm
 
Make sure you understand exactly what they intend to do and why - what you are paying for. That's elementary. This should be defined both in terms of the operations they will perform and the results they expect to achieve.
 
Operational
Are they going to do work or just tell you how to fix the site?
Are they going to change code on your Web pages?
Are they going to change text on your Web pages?
Are they going to add new Web pages to your site?
Are they going to add new links to your Web pages from outside?
What key words and phrases will they optimize for your site?
 
Check Before You Buy
Ask to see sites that this SEO firm has done. Do they get good rankings in Google for popular keywords?
 
Do pages have meaningful file names like nice-widgets.htm? Long and meaningless file names and URLs like outasite.com/products/cat/0124sdsfag12893435576fz are a sure sign that there is no search engine optimization here.
 
If they are generating the new site, is it making "dynamic" pages (based on a database and generated on the fly) or static HTML (actual HTML files)? Static HTML is better.
 
Does every page link back to the main page using a keyword that people will search for? ("Home" is not a keyword unless you are selling homes.)
 
Check the source code (in any browser, click View-->Page Source). Do images have "alt" tags with keywords? Does the <TITLE> tag in the header contain a keyword? Is there a site map for the site?
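 
As a concrete illustration (the keywords, file names and domain are made up), here is the kind of markup you hope to find when you view the source, next to the kind that suggests nobody optimized anything:

<!-- What you want to see: a keyword in the title, descriptive alt text, a keyword anchor back to the home page -->
<title>Blue Widgets and Widget Parts - Acme</title>
<img src="blue-widget.jpg" alt="Acme blue widget">
<a href="http://www.example.com/">Blue widgets home</a>

<!-- What you often see instead -->
<title>New Page 1</title>
<img src="img0042.jpg">
<a href="http://www.example.com/">Home</a>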
 
Talk to Webmasters who worked with these people. Did they get more visitors? Check the Web sites in Alexa (http://www.alexa.com/data/details/traffic_details/)  to see if there was really a recent improvement in site rank for traffic.
 
 
Results
Do they promise top spots in Google or other search engines for your pages for different keywords?
Do they promise to increase the  Google PageRank  of your site?  
Do they promise a percentage increase in the number of visitors?
 
Beware
Changing names and files - Firms that want to change the domain name or filenames of your site generally do not know what they are doing. Changing your domain will reduce traffic if you have any visitors now. If the name MUST be changed, the old pages and domain should remain and should be redirected to the new ones.
 
JavaScript and Flash - JavaScript is a turn-off for search engines. You do not want pages with JavaScript menus in them or lots of fancy JavaScript for display graphics or Flash images. Google is only now learning how to deal with Flash images.
 
Meaningless promises - Top 10 spots for a keyword in Google - If the keyword is Sexonomonia, it won't help you, because nobody is searching for it. If the keyword is "Maps," they cannot do it for you - too many other good Web sites rank ahead of yours. Be sure you understand what they are promising and why it is good for your site!
 

Thursday, December 18, 2008

Registering Web pages in search engines - a better way

Here's my year-end nondenominational Christmas, Hanukkah, Kwanzaa and Eid el Fitr wish list from Google. It's simple:
 
1- Don't penalize me for Google bugs. I really cannot control whether someone links to http://seo.yu-hu.com/ or http://www.seo.yu-hu.com/ or http://seo.yu-hu.com/index.html or http://www.seo.yu-hu.com/index.html. If your spider and software are too dumb to understand that these are all the same page, it is not my fault.
 
2- There is really no difference between a page that is in a subdirectory and one that is not. Really! If you are going to penalize pages that are in subdirectories, or index them later, you ought to be telling people that that's how the spider works. If you like flat directories, we will give you flat directories. Just ask!  
 
3- Most important - Give us all a quick and easy way to tell you when we have one or more new pages, or to tell you to index the whole site. There are a dozen good technical ways to solve that problem. The XML site map is one of the bad ones. If you are going to insist on those maps, then provide a free tool that will crawl the site and submit the URLs in any format you like. But in addition to that, please give me an easy way to tell you that there is one new page at the site. I should not have to make a whole site map for a single page. For bloggers it's easy: there is an RSS syndication file, and that file can be sent to an aggregator. You probably use information from blogs and from aggregators like Bloglines. But what about sites that do not have an RSS feed? Why isn't there a simple interface for submitting a single URL to a queue? It could be used for new pages or changed pages. That could also take a load off Google software, since widespread use of such an interface reduces the need for frequent spider crawls through thousands of pages to find just one that is really new or changed. It is incomprehensible why registration of new pages has to be such a hassle when there are simple and foolproof technologies available to solve this trivial problem.
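 
For anyone who has not seen one, the XML site map format I am complaining about looks roughly like this - a minimal file listing a single new page (the URL and date are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/new-widgets-page.html</loc>
    <lastmod>2008-12-18</lastmod>
  </url>
</urlset>

All that boilerplate, regenerated and resubmitted, just to announce one URL.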
 
Is that too much to ask?
 
Anyone out there who sees this cry of despair - post it to Matt Cutts's blog and maybe Santa will answer our prayers.
 
Ami Isseroff
 
 
 

Monday, December 8, 2008

Less Website traffic on Weekends and in Summer

A number of people have complained in forums about the drop in Web site traffic in summer and on weekends. It is certainly a fact. There are not a lot of data about this, but there is at least one published set of yearly statistics showing a trough every August. I could not find any data about weekends, so I posted some of my own. My own Web sites are more affected than most by summer traffic depression, and perhaps by weekend depression as well. Traffic in May is about 1.8 times that in August.
 
Here's the article with the graphs: Weekend and Summer Web traffic depression 
 
If anyone has any suggestions about what ought to be done about it, or ideas about long term trends, this is a good place to discuss that.
 
 
 

Sunday, December 7, 2008

Add sense: Google Adsense in a recession

If you have Google AdSense ads on your Website then, depending on the category of your site, you may have noticed that CTR (click-through rate) dropped for a while - quite a bit - and then seems to have recovered. Fewer people may be clicking ads in a recession - they aren't buying. And Google may have fewer ads in stock for that category in each location, so they were putting up multiple copies of the same advertisement or public service ads. Of course, multiple copies of the same ad are not as effective as different advertisements that attract different clientele.
 
Google seems to have fixed that a bit by being a bit more broad minded about what ads they will match to what keywords, or in some other way, because CTR has improved slightly. At the same time, they are trying to make up for low CTR in Webpage ads apparently by generating more ads on search pages. Search pages seem to have a much higher CTR than website pages. Google may be trying to improve their numbers for the next quarter earnings report. In any case, Google is number 1 with about 80% share of advertising revenue. (See Google's Ads Per Keyword Output In High Gear)

Troubleshooting Search Engine Optimization - back to basics

While many people get carried away by Web 2.0 and Twitter and arcane marketing strategies, they often neglect the basics. Then they don't understand why their Web site or page is not listed. Optimization "experts" ask in forums why their customer's page isn't doing well. Usually it is because they neglected simple stuff or simply ignored all the SEO Basics. If your Web page or site is not doing well in Google and not attracting visitors, check the simple-minded basic things first:
 
1. Is the page or site listed in Google at all? Doh.
 
2. Links - Is the page/site linked from a main page in your Web site, one that has high Google PageRank? Is it linked from other Web sites and directories?
 
3. Does it use your  Keywords in all the important places?  Remember, the search engine spider is a dumb machine. If your page is about Widgets, you have to tell the spider it is about widgets in language that it understands. If you are writing about Widgets in a blog, does the blog article title say "Widgets?" Or is it a fancy title that nobody can associate with your real topic, like "To be or not to be?"
 
4. If you can include some keywords in the filename of the page, that can help.
 
5. Did you do the drudgework of filling out the Title tag, the Description meta tag and the Keywords meta tag in the Head Section? Is there a clear <H1> title in the page itself (only one)? Are there a few pages (at least one) linking to that page from outside your Web site, using the keywords of that page as the anchor text? If you linked to the page with the anchor text "More" or "Read about it here," the Google spider is very likely to classify the page under "more" or "read" - no kidding, I saw this happen - so you need to use the title text. If the Title tag (<TITLE>This is the Title</TITLE>) in the Head Section of the page is blank or says "New Page," you can't expect the search engines to do very well at figuring out what the page is about.
 
6. Does the body TEXT of your page use the keyword a lot? If your page is supposed to be about widgets, but all the text is about Britney Spears, search engines can't know that it is about widgets.
 
7. Did you use alt tags in your pictures of widgets (or whatever your site is about) to tell search engines, people using text browsers (if there are any) and visually impaired people that these are all pictures of widgets?
 
8. Did you use Title attributes in the links so that search engines can "see" that these are links to articles about widgets? It is not your fault if someone else named their article about widgets "What is to be done?" or "A very excellent solution." But if you have too many links with that sort of text, the search engines will think your page is about Shakespeare or the New Testament.
 
9. Did you check the code to make sure that the tags for the Body and Head Section are correct? If there is no closing </head> tag or no opening <body> tag the browser might show the page just fine, but the search engines can't tell that what is there is text you want people to read. Likewise, there can be other junk that the search engine spiders cannot see.
 
10. Is the page mostly text that search engines like, or is it filled with JavaScript and Flash gibberish and CSS style directives that should be in a separate file? Is the code clean, or does it have a lot of junk in it that is produced by WYSIWYG editors, like <Span style = "style1"></span> and
<FONT size = "2" Face = "ARIAL" Color = "Red"> </FONT> <FONT size = "3" Face = "ARIAL" Color = "Blue"></FONT></FONT>? WYSIWYG editors will fill whole Websites with that sort of meaningless flotsam if you don't edit it out. Search engines see that and lower your score.
 
11. Did you link to the main page from every page of your Web site, using an Absolute Path and proper keywords in the Anchor Text? If your home page is about widgets, the anchor text must say something about widgets, not "Home." If you link to the main page with the anchor text "home," then you may expect to find that page when you search for "Home" in Google. (A bare-bones example pulling these checklist items together follows this list.)
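 
Here is a bare-bones sketch of a page that covers the checklist above - title tag, description and keywords meta tags, a single H1, keyword text, alt text, a title attribute on a link, and an absolute-path keyword link back to the main page. The domain, file names and keywords are invented for illustration:

<html>
<head>
  <!-- item 5: title tag, description and keywords meta tags -->
  <title>Blue Widgets - Acme Widget Catalog</title>
  <meta name="description" content="Specifications and prices for Acme blue widgets.">
  <meta name="keywords" content="blue widgets, widget parts, Acme widgets">
</head>
<body>
  <!-- item 5: exactly one H1, using the page keyword -->
  <h1>Blue Widgets</h1>
  <!-- item 6: body text that actually uses the keyword -->
  <p>Acme blue widgets come in three sizes. Every blue widget is tested before shipping.</p>
  <!-- item 7: alt text that says what the picture shows -->
  <img src="blue-widget.jpg" alt="Acme blue widget, model 5">
  <!-- items 8 and 11: a title attribute on the link, an absolute path and keyword anchor text back to the main page -->
  <a href="http://www.example.com/" title="Acme widget catalog">Acme widget catalog</a>
</body>
</html>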
 
Nine times out of ten, Web page search engine visibility can be improved greatly just by checking and fixing those things. And yes, you can go through every page that I or anyone else makes and find ways to improve that page, especially if it has been around for a few years and technology has changed.
 
Beyond that, there are always unknown factors and gotchas - the Job factor in SEO. Remember Job from the Bible? You can do everything right and still the Gods do not smile. Remember that not every page can be linked from the highest-ranking page on the site (usually the main page), and pages in the lower-level sections may take much longer to become visible in search engines. Using nice orderly separate subdirectories seems to hurt even worse, though it should not. Sometimes these subsections cannot be avoided. Make up for it by linking to sub-pages from articles and from auxiliary Web logs and sites.
 
If all else fails and the pages won't get listed no matter what you do, submit a Site map to Google Webmaster central. It won't guarantee high placement, or even registration, but at least Google will consider the page.  
 
Ami Isseroff
 
 
 
 

Wednesday, December 3, 2008

Showstopper Google bug?

All search engines are based on the concept of the authority of Web pages and Websites. The pages and sites that are deemed to have the highest authority are retrieved in SERPs (Search Engine Result Pages) at the very top. Google based its success on having the best measure of Website authority - the Google PageRank algorithm. The rationale for this intellectual and technical feat and the mechanism are described here: The PageRank Citation Ranking: Bringing Order to the Web.
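 
For reference, the core of the measure is a single recursive formula: a page inherits rank from the pages that link to it, diluted by how many outgoing links each of those pages has. Roughly, in the notation of the founders' papers (d is a damping factor, usually set to 0.85):

PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)

where T_1 ... T_n are the pages linking to A and C(T) is the number of links going out of page T. Note that nothing in the formula knows whether A is the original text of a quotation or the ten-thousandth page that recycles it.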
 
Let's see how good it is. The best authority on the Web for what the Bible says is a copy of the Bible, no? Anyone who quotes from the Bible might make a mistake, but the Bible is infallible about the Bible, I would think.
 
Here is a quote from the King James Bible:
 
If then God so clothe the grass, which is to day in the field, and to morrow is
cast into the oven; how much more will he clothe you, O ye of little faith?
(Luke 12:28)
 
The phrase "O ye of little faith" appears in the book of Matthew as well. I searched for this phrase, in quotes in Google. The first references that were quotes from the Bible appeared in the eighth page of results! Items like Time magazine articles, song lyrics and an article in one of my Web sites accounted for the first 70 or so results. Google claims it has about 36,000 results for this phrase.
 
"To be, or not to be" is the overly famous quote from Shakespeare's Hamlet. When this is searched in Google, a lone result appears in the third place in my part of the world, behind two Wikipedia articles. It is not from a Web site with the whole play, just a fragment with the soliloquy. The next result that is really from Shakespeare's play appears around position 75 again. Is there a 70 penalty for having the original text (like the 30 Penalty)? Google has about 2.5 million pages with this phrase, or so it claims.
 
Part of the problem is that we used phrases that are extremely popular. I looked up "which is to day in the field"  in Google and indeed, the very first page retrieved was from the Bible. But it was the only one actually from the Bible on that page! There were about 25,000 such pages in Google.
 
I also tried a different phrase from the same Shakespeare soliloquy, "But that the dread of something after death" - not all that famous. Not a single one of the first 10 results was a link to the original Shakespeare text. The text of the "Tragedie of Hamlet" was first listed as result number 26! There were 23,700 results for this phrase.
 
If we cannot rely on "authority" to get search engines to deliver the authentic and authoritative origins of quotes at the top of search results, then it doesn't seem to be worth that much.
 
I had better luck with this line "somewhere i have never travelled,gladly" - it is the title of a poem by e.e. cummings, who is apparently not quite as famous as the Bible or Shakespeare, and therefore he is allowed to be more of an authority on his own work than William Shakespeare or the King James Bible. For this quote too, there were about 9,000 pages claimed by Google. The entire first page of results and more were filled with links to the poem itself.
 
For "How do I love thee? Let me count the ways," the start of Sonnet 43 from Elizabeth Barrett Browning's "Sonnets from the Portuguese" the first three entries retrieved by Google were the actual poem. That's fair enough. More than that would be useless. Someone might be searching for a different page. 
 
  
 
 
 
 
 
 

Phishing - Black Black Hat

Something a bit different this time. Phishing scams and the like are probably the ultimate in  Black Hat SEO and they are a real and costly danger to e-banking and any Web sites that allow financial transactions. (see Danger! Your identity is not secure). Phishing uses a variety of techniques to direct victims from a legitimate Web site to the scam Web site, where they enter their identification information, account numbers, social security, credit card numbers etc. that are then used by crooks to steal their money.  Other schemes download appropriate  Scumware to the victim's computer. This captures their identification details and sends them to the crooks who operate these schemes.
 
Scientific American has a useful article on How to foil Phishing Scams. However, it is only useful up to a point. Most people refuse to become technically educated, and if they do, the scammers will just find new techniques to beat the system.
 
Banking and other institutions have adopted various security measures such as token devices, but these are difficult to manage for various reasons. Identiwall has a promising system based on multiple authentication factors using cell phones. It can be applied to secure e-banking, online stock brokerages and any other type of Web operation, financial or otherwise, that requires confidentiality.
 
The ultimate phishing scheme is one that can steal a legitimate Web site and redirect visitors through Cloaking, Shadow domains and similar techniques. Visitors think they are giving their identification information to the bank or stock broker, but the crook is at the other end. In principle, it is not morally or technically different from Black Hat SEO.
 
 

Monday, December 1, 2008

The nemesis of search engines

The logic of dialectics dictates that everything carries within itself the seeds of its own destruction. The basis of Web search engines is Website authority and Web page authority, determined by age and incoming links. That is also the basis for The killer search engine bug. That's the short version. If you haven't figured it out: older is not necessarily better. An older description (say from 1960) of how a computer works and what it can do is not better than a new one. But the authority algorithm favors older pages. Bigger is not necessarily better either - think Apple versus IBM mainframes. But authority algorithms favor the bigger Web sites. See The killer search engine bug for some of the gory details.
 
 

Sunday, November 30, 2008

How to choose and not choose a Search Engine Optimization Consultant

I saw a Website that raised the issue of how to choose a Search Engine Optimization firm. It looked very promising, but in fact, I didn't find it particularly helpful. So I wrote an article of my own. The meat of that article is 11 things you must know before choosing an SEO firm. Why 11? Because it wouldn't fit in 10.  The most important things:
1. Understand when a Search Engine Optimization Firm can't help.
2. Understand what things you need to do yourself.
3. Make sure you know exactly what SEO is.
4. Make sure you know exactly what this SEO consultant is going to do.
 
The rest, and some comments on that other site, are here: Selecting a Search Engine Optimization Firm
 
 

Conversion and Landing Pages

In writing about Landing Pages, I studied several articles that Google found for this search phrase. Remarkably, almost all of them were really about Conversion Rate! A landing page, for those who do not know, is the page that is meant to be an entryway into the Web site. It is the "bait" in commercial Web sites. Conversion rate is the measure of how many suckers, er, visitors go from this landing page to the page where they sign up, buy something or do whatever it is you want them to do. In this antiquated Website model, there are no search engines out there. All your traffic comes from paid advertisements or emailing, and therefore you know where the visitor is going to land and how they got there. So all the advice about landing pages, not surprisingly, is about conversion rate, which, when you think about it, is about how to get people off the landing pages and on to other pages.
 
At the other extreme from the believers in the Landing Page model are those who believe that "every page is a landing page." Theoretically it might be true in the age of search engines, since there are fewer "inner" pages - the old model of the hierarchical Web site with a single front-page entry point is long dead. But taken literally it is absurd. Not every page is a landing page. A custom 404 error page is not meant to be a landing page, a "Thank you for filling out the form" page is not a landing page, and a print-article page is not intended as a landing page.
 
But...
 
But even in "organic SEO" it is impossible and a waste of effort to make every page in the Web site a top draw. Some pages are more equal than others and should get more effort and thought. They may be portals such as an entry to a product list or list of links, or they may be very well written and researched articles or pages with stunnng or interesting graphics. Sometimes a page becomes unexpectedly popular. There is generally a reason that is obvious after the fact (not always).
 
But in analyzing the statistics of any website you can see that there are "landing pages" in the sense that some pages get a disproportionate amount of the entry traffic to the Web site, and these are not necessarily the ones you expected to be top pages. In a site with several thousand pages, only about 550 were entry pages in a given period. Among those, the top 20 accounted for about 58% of the traffic! That doesn't mean the site would have 58% of its current traffic if it had only those pages. The less popular pages support the more popular ones by linking to them, adding to the PageRank of the site, etc. But those 550 pages are the ones that are delivering the message to visitors of that site, and the top 20 remain about the same month after month. So you need to check site statistics and exploit the opportunities. If visitors are getting to a page that isn't really what you want them to see, you have to figure out how to exploit that and lead them to a page you do want them to see.
 
How to get people to a landing page is the one topic not covered in articles about landing pages. The answer is the same as for all search engine optimization, because SEO is optimization of landing pages:
 

On Page Optimization - Keywords and coding and content on a page.

Off Page Optimization - This usually refers to Links from other Websites.

Website SEO design - Link structure in your site, Type of software used to create your Web pages, size of your site...

 
As for improving Conversion Rate, I put most of the important tips I found, plus some of my own, in the Conversion Rate article, along with a lot of useful links. There are some important recommendations that can help you, some obvious corner-cutting, and some very honest advice that says, "These factors can influence conversion rate. However, they work differently for different products and situations. Therefore, you have to try different ideas and see what works (A/B or multivariate testing)."
 
A big problem you always have to deal with is that the things that make for good conversion rate often make for lousy Web pages from the standpoint of search engine optimization and user navigation. The ideal landing page has a fairly ugly and prominent message, like
 
"Buy superwidgets and gain immortality,
 riches and unlimited sex - Guaranteed or your money back"
And it has a single button or repeated button or link:
 
Don't Wait!
Click here to Buy Superwidgets NOW!
Hurry while they last!
 
It doesn't have a lot of long-tail content for search engines to glom onto, and it doesn't have navigation links and fancy doodads.
 
I trust you will find a lot of things to think about and use in those two articles about Conversion Rate and Landing Pages, and some ironic laughs, like the SEO "expert" who had to hire someone else to fix their landing page, and the breakthrough conversion recommendation that resulted in a huge increase in conversion rate but not a single new sale. How could that be? Think about it...
 
Meanwhile...
 
Don't Wait!
 
 
Ami Isseroff

Monday, November 24, 2008

Conversion and bounce metrics - Sense, Nonsense and SEO

Conversion Rate and Bounce Rate are SEO buzzwords that are getting bounced around a lot lately. If you have a low conversion rate or a high bounce rate, you should worry. It's like having too much bad cholesterol or not enough good cholesterol. Conversion rate is directly applicable to commercial online-store Web sites and somewhat less applicable to other websites. It measures what proportion of your visitors actually bought something or did whatever it was you wanted them to do. Bounce rate is supposed to measure how many visitors got to your web site, said "this is of no interest," and left. In actual practice, there are two pretty iffy ways of measuring bounce rate. The first measures the time that visitors spend on a single page. Obviously, if they are only spending a few seconds on a page and leaving the site, they might be uninterested in your site, or else they might be robots, or they might be people who get to your page, find what they want and leave, or people who click an external link or another link in your site. It all depends, you see. Of course, a visitor might get to a page, decide it is worthless and go get a cup of coffee or read e-mail. So pages that are open for a long time don't always mean what you think. The other measure of bounce rate is how many people get to your site, view only one page and skedaddle. They might not like what they see there, or they might have gotten exactly the information they need and they are out of there.
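 
For what it's worth, the second measure is usually computed as a simple ratio (a rough sketch of the arithmetic, not any particular tool's exact definition):

\text{bounce rate} = \frac{\text{visits that view exactly one page}}{\text{total visits}}

So if 600 of 1,000 visits look at a single page and leave, the bounce rate is 60% - whether those 600 people left disappointed or left with exactly the fact they came for.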
 
Suppose you have an informational Web site. If I want to look up Abraham Lincoln's date of death, I should be able to get to that page in your site from Google, get my information and be gone. I'm happy. If I have to go to the main page of your site, click on "Famous People," click on "Presidents of the US" and then click on "Abraham Lincoln," your site design is terrible and your optimization is worse. Your visitors are unhappy, but Alexa and similar tools will show that you have a lot of "depth" and a low "bounce rate," because each visitor generates four pageviews.
 
Moral of the story: Make sure you know how someone is defining a term before continuing. If they haven't defined it, then what they have to say is often not interesting, or generates more confusion than understanding. When evaluating claims about how search engines rank pages, consider the real-world problem that search engines have to solve for all kinds of websites, not just the commercial sites that most SEO "experts" have in mind. Not everyone is eBay or Amazon. If it is really true that search engines like Google are using bounce rate (in the sense of depth of visits) as an indiscriminate criterion for ranking all websites, they are making a mistake. It might be useful for sites of certain kinds, but less useful for sites like Wikipedia, where people may spend 20 seconds on a single page getting a single fact they needed. I didn't see any dip in Wikipedia rankings, did you?
 
Ami Isseroff
 

Sunday, November 23, 2008

Reports of Search Rank Death Greatly Exaggerated

No, Matt Cutts didn't say rank was dead. He didn't even come close, but if you want to believe that very much, you might decide that is what he said. Nobody ever explains whether by "rank" they mean Google PageRank or rank (position) in a list of search results. Cutts seems to have discussed the latter. But trust me, you will need both to get traffic. Without being ranked high, you get no visitors. With no visitors, a conversion rate of 5% is still no sales, even if it is "better" than 1%.
 

What's the difference between SearchWiki and Search Wikia and what's it all about?

What's the difference between SearchWiki and Search Wikia and what's it all about? Read about Personalized Search. Is personalized search going to mean the End of SEO and LIFE AS WE KNOW IT (LAWKI)? Is GOD or Matt Cutts planning the End of Days? "For these are latter days we know." I don't think so. Life will go on and so will SEO. Read why here: Personalized Search: End of SEO? Rank? Life as we Know it? Is SEO dead or is it alive and well? Maybe it is just hiding in Argentina or Brazil.

Ami Isseroff

 

Monday, November 10, 2008

Is there anything better than Google?

Google is great but not perfect. I am not going to list the problems here, but here is one try at a solution - there are others out there. I don't like this particular solution, because it somehow uses the preferences of my social network as a way of ordering my search results. If I just use the information everyone else has, I am creating an incorrect virtual reality for myself. Besides, what if I am researching a topic the people in my social network know nothing about?
 
 
 
 
 November 03, 2008
 
You're not your mom, or your dad, grandma, or great aunt. Your interests and experiences are different. Your world is different. What you're looking for in life - and on the Internet - is not the same as what they're looking for. Yet when you plug a term into a major Internet search engine, you get the same results your grandma does. And it's up to you to sort out the results, to find the information you really need.
 
Unless you use the new Israel-developed Delver search engine, says Delver CEO Liad Agmon. "Instead of giving you the general search results you get with Google, which you then have to ferret through, Delver gives you tailor-made search results that are much more likely to give you the information you're looking for," he tells ISRAEL21c.
 
It's not that Herzliya-based Delver has developed a mind-reading computer that can figure out what you're really after when conducting an Internet search. Instead, Delver leverages your web presence - using the information it gleans from your social networking accounts, like Facebook and MySpace - to get a picture of your online personality, the better to rank search results in a manner that makes sense to you.
 
One search doesn't fit all
 
It's certainly a different approach from the one-search-fits-all approach used by just about every other search engine. While Google does offer a number of tricks that lets you narrow down the results you get back for a query - such as structuring a search using specific syntax - most users aren't going to use search terms like "intitle" or "inurl" to narrow down their results. And without heavy use of modifiers, Google will often send back tens of thousands of results in basic and general searches.
 
To help you get the information you're after, Delver relies on a heretofore-untapped source to figure out what you're most interested in - your friends. When you sign up for Delver, you list the social networks you're a member of, and when you conduct a search, Delver returns the results based on what the people you associate online like and know.
 
This works great, says Agmon, when you're seeking information that contacts - or contacts of your contacts - have information about. "I recently had to take a business trip to a new city and was looking for a good hotel," Agmon says. "By using Delver, I was able to get details of the experiences of other travelers to this city, and much more easily make an intelligent decision on which hotel to stay at, with the best price." The more active you are in social networks (Delver supports all of the more popular ones), the more information you can mine from Delver.
 
Using other search engines, "an elderly person in Colorado and a teenager in Tel Aviv will get the exact same results when doing searching, for example, for 'London shows,'" says Agmon. "But it's probably safe to say that both users are not looking for the same kinds of shows." Using the social network profiles either or both of them may have, Delver can give users more precise results, he says. "Delver isn't just search - it's qualified search, with the results vetted by your social network, ensuring you can more easily and quickly find the information you're most interested in."
 
Using social networks to deliver
 
All the information returned by Delver is already "out there" on the open Web, says Agmon - gathered by search agents that scour the Internet and index information, just like Google, Yahoo, and all the others do (Delver does not rely on Google for its results, and has its own search system, says Agmon).
 
"The only thing we are doing is qualifying the results based on a profile we build using your social network," which you give Delver access to. "So no private information is ever given out or even searched by Delver." And Delver isn't just indexing information - it can help you more easily find media, including music and video, geared to your tastes. "Eventually, we hope to be the search engine of choice for users," says Agmon.
 
Delver began a private beta in February, and went public in July - and while Agmon says the company could not release specific information on the number of users, he does say: "We've been quite surprised at how quickly Delver has taken off and how many people use it on a regular basis." He adds that the service has garnered significantly more traffic than company executives had expected at this stage.
 
This past July was significant not just for the public emergence of Delver, but also for the fact that it was the start of one of the most severe credit crunches in recent years, seemingly a bad time for startups like Delver to be seeking out money.
 
But Agmon is optimistic. If you've got the goods, investors will take a risk, because they believe their investment will pay off. "While getting VC money when everyone believes a recession is imminent is tougher, there are positive aspects to a downturn - it's when the nonsense ideas get ferreted out, leaving the winner ideas an open field," says Agmon. "This is the right time for companies with better ideas. Delver is in it for the long haul, and we've got a great idea - one that billions of people around the world could significantly benefit from."
 

Sunday, October 5, 2008

Cloaking or Cookie Stuffing or what? Is Google really pure as the driven snow?

It seems everyone knows everything better than I do. What do I know?
 
Someone commented that the blog post about Cookie Stuffing is really about cloaking, and that dear Google is innocent of any wrongdoing, as they are not responsible for the content of advertisements. By that logic, Website owners should not be penalized for external broken links, since we can't check each link, and we shouldn't be penalized for backlinks from shady link farms, since we aren't responsible for what other people put on their websites.
 
But Google DOES hold us responsible for these things and Google IS responsible for the ads they run. They have to be.
And the company that advertises the scripts says it is doing Cookie Stuffing (not cloaking) and says they are doing  Black Hat SEO. Maybe they don't know what they are doing.
 
By the way, last year Yahoo! was caught cloaking in their automobile ads (see http://www.agerhart.com/seo-rankings/yahoo-caught-cloaking-will-they-ban-themselves/ ). If what the search engine spider sees is not what the user sees, and what happens is not what the user intended, it is cloaking, no matter how or why it is done.
 
Cookie Stuffing usually has to include some cloaking mechanism (or call it something else - it does the same thing) to ensure that the search engine sees an innocent page, but the user gets to the advertiser's page - even if the user doesn't know they got there.
 
Ami Isseroff  

Google advertises Black Hat SEO software

Google prides itself on ethical business practices and Google Webmaster guidelines repeatedly warn against unfair  Black Hat SEO practices. But Google allows  AdSense advertisements for  Cookie Stuffing. Here's the ad, collected today Sunday, October 5, 2008, for a search for keyword [cookie stuffing]

Cookie stuffing is an unethical way to earn lots of money from Affiliate advertisers. A user clicks a Search Engine listing for (say) a supposed political Web site. But the site that the search engine spider saw is not what the surfer will see. Instead of getting information about Sarah Palin or Joe Biden, the page itself "clicks" an affiliate firm's advertisement on the visitor's behalf, and shows (for example) a page for purchasing a book about Joe Biden or Sarah Palin. The firm boasts that it brought nearly $9000 in affiliate revenue to a site that got 1,000 visitors a day. And - these are cautious folk who only send 10% of the visitors to the affiliate Web site. A comfortable way to make a living - for Google too.

If Webmasters are not supposed to use Black Hat optimization tricks, should Google be making a living by advertising Black Hat software?

Ami Isseroff

 

 

 

 

 

 

Saturday, October 4, 2008

SEO in one easy lesson

All of Search Engine Optimization is divided into three parts (not two, as most "experts" would have you believe). The first two are:
 
On Page Optimization - Things you do to a Web page to make it more readable for Search Engines.
 
Off Page Optimization - Everyone tells you this means adding backlinks from other Web sites. They mean Off Site Optimization.
 
So now you've got your content and your backlinks. What is missing?
 
Website SEO design - That's all the things you do to Web site structure to make your site better, and it probably includes things like setting up newsletters and the like.
 
Read those three articles and you have a pretty good idea of the basics of SEO.
 
Ami Isseroff

Wikipedia introduces the "no-follow" tag

Wikipedia has introduced a rel="nofollow" attribute that is evidently used on all outbound links. This was evidently done to reduce the link spam that is part of Black Hat SEO. However, it is applied indiscriminately to all links, including those on which the articles are based, which makes it somewhat unethical. What it will mean is that the gigantic volunteer encyclopedia, which Google lists as #1 for so many keywords, will perpetuate and increase its web dominance at the expense of other Web sites. It means that nobody but Wikipedia can be number 1 for an informational keyword niche. At least in theory there could be an excellent article, much better than Wikipedia's on a given subject, at a different Web site, but it will never exceed Wikipedia in the listings, because no site is likely to have more pages and authority than Wikipedia. This shows an important weakness of Google PageRank, which bases authority and positioning on the number of backlinks.
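 
For anyone who has not seen it, the markup in question is just an attribute on an ordinary link (the URL here is hypothetical):

<!-- a normal outbound link, which search engines count as an endorsement -->
<a href="http://www.example.com/widget-history.html">History of widgets</a>

<!-- the same link with nofollow, which tells search engines to ignore it when computing authority -->
<a href="http://www.example.com/widget-history.html" rel="nofollow">History of widgets</a>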

Sunday, September 28, 2008

Search quality: How good is Google?

Yesterday I wrote about Google Quality Rater Secrets - how Google uses humans to rate its search engine results. Improving the quality of search engine results was the aim of the original search engine and ranking algorithm described by the Google founders in The Anatomy of a Large-Scale Hypertextual Web Search Engine and The PageRank Citation Ranking: Bringing Order to the Web.

Google has brought search engine quality a long way since then. But I did a little experiment which shows that Google's results are, by its own standards, not nearly as good as they ought to be or could be.

More at  The Quality of Google Search Results

Saturday, September 27, 2008

Google quality rating and what it means

Google's philosophy was oriented to obtaining high-quality search engine results from the beginning. This is evident from the original papers published by Page and Brin, The Anatomy of a Large-Scale Hypertextual Web Search Engine and The PageRank Citation Ranking: Bringing Order to the Web. High quality is defined by them, basically, as the results that the user would want to get. The various algorithms and tweaks, beginning with Google PageRank, are all intended to provide the highest quality results.
 
Not surprisingly, Google maintains an army of quality human checkers who evaluate the results of searches. Enterprising bloggers found a confidential document that describes the rating criteria for Google quality raters and put it on the Web for a while. This provoked a lot of comment that was based on the mistaken notion that the criteria reflect what the Google algorithm actually does. That is not the case. At best, they reflect what Google wants its algorithm to do. In brief, Google wants the algorithm to carry out the intention of the surfer. Pages that are retrieved are rated for relevancy to the search query. No attempt is made to determine if the information in those pages is correct or reliable, other than the criterion of "authoritative" citations. If other people think it is right, or if the page cites "authorities" then it must be right, according to Google. But the important point is that the algorithm doesn't necessarily carry out the desires of the management. If you design a page according to the quality handbook, it will not necessarily get a high ranking in Google.
 
In fact, we do not know that the ratings are related in any way to the position of the pages retrieved by Google. Evidently the raters don't know either. They are presented with a lot of information about a page retrieved for a query, but that information doesn't include the position of the page in the results returned by Google or the PageRank of the page retrieved (though raters can usually find both). There is no attempt, at least not by the raters, to determine if the page returned as #10 in the search is better or worse than the page returned as #1, or if the first 10 pages are better or worse than the next 10 pages.
 
For a detailed discussion of what the raters and Google are looking for, see:  Google Quality Rater Secrets.
 
Ami Isseroff

Friday, September 26, 2008

Visitor statistics - What do the numbers mean?

How many actual people come to your Web site and how can you know?
 
How many people actually visit that Web site where you want to advertise? Is it better or worse than the competition?
 
See Web Site Statistics Versus Truth and  Web site statistics and don't believe every number someone tells you until you have checked!
 

Tuesday, September 23, 2008

Care and feeding of Web sites

Web sites have to keep running in order to stay in the same place - because the web is growing, and because information, like Paris Hilton, is aging.
 
Once you get to the top, you need to keep tending your Web site if you do not want to lose visitors and Web site visibility.
 
Read all about Web site Maintenance
 
Ami Isseroff  
 

Don't let disaster strike your Web site

If you have an online business Web site, don't bother reading this. But if you have spent a lot of time building a news Web site, an online dictionary, a site with a lot of historical material or other informational Web site, please read on.
 
Not long ago a friend died at a relatively young age. He had spent 12 years pioneering use of the Internet, and built a Web site of about 20,000 pages. His widow had no way to maintain the site, and didn't even know how to renew the domain name. When the domain name expired, the man's work simply vanished from the Internet.
 
As a last resort for such Web sites, www.archive.org maintains a digital archive of Web sites. It is slow and awkward, but at least it saves the information in the sites. For some reason, a directive in the robots.txt file of this particular Web site intentionally or unintentionally excluded the archive.org spider, so even that is lost.
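 
For the record, an exclusion like that takes only a couple of lines in robots.txt. If your own file contains something like the sketch below, the archive will skip your site entirely (this assumes the Internet Archive's crawler still identifies itself as ia_archiver):

# keep the archive.org / Alexa crawler out of the whole site
User-agent: ia_archiver
Disallow: /

# let every other crawler in
User-agent: *
Disallow: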
 
Even if one day the site and domain are restored, all the search engine visibility that was built up over 12 years by all the backlinks to this Web site will probably have vanished - it will be "Gone with the Wind." 
 
If your work and your Web site are precious to you, and you want others to see them even if you lose interest in the Web, go out of business or become unable to take care of the site for any reason, read about lapsed domains and vanishing Web sites - and how to protect your site: http://seo.yu-hu.com/Lapsed_Domains.html.
 
 

Monday, August 25, 2008

What makes a page or site rank well?

There is a very important article out at SEOMOZ surveying "expert" SEO practitioners' opinions on what makes a page or site rank well. It is mostly important to you if you have been listening to Voodoo practitioners, and there are a lot of them out there.
 
I will eventually get around to discussing each factor, but in general:
* The highest ranked factors are certainly valid for Google.
 
* What doesn't matter for Google at this point just doesn't matter. If someone tells you they get all their traffic from Yahoo and MSN search engines, it doesn't mean they are experts. It means there is something really wrong with their optimization strategy.
 
* From eyeballing it, anything given less than a rank of 3 out of 4 in that article is probably a waste of time and might be harmful. These get into the realm of Search Engine Optimization (SEO) Superstitions.
 
* What is basically a test of the expertise of the "experts" is given in one of the Pie charts at the end of the article. About 32% of the experts said there is no Google Sandbox (new sites are not ranked for a long time). Find out who they are and do not use their services. There is no Santa Claus and there is no Sandman, but there sure is a Google Sandbox.
 
* Another pie chart illustrates the naivete of the people making the survey. It asks which factor is most important for "Google Rankings" - it doesn't differentiate between the overall ranking of a site and the ranking of a page for a keyword. It asks whether "Authority" of the domain is most important, or keywords on a page, or backlinks (external links) to the page. Most people chose Authority (= Google PageRank) of the site. I would have to say "it depends," though if pushed to the wall I would say, based on empirical data, external links to the page with the right anchor text. Of course, it depends how many links and who your competition is. Wikipedia pages are always going to rank high. But it is hard to separate the fact that such pages tend to get links and have high keyword density from the authority of the site.
 
A relevant article that tries to refer to the above article is here. Unfortunately, while their article discusses link building "authoritatively," their link to the other article leads to a 404 error page - the URL is wrong. But it is still a good article that discusses many of the errors people make, such as not linking to authoritative Web sites, avoiding links from 0-PageRank Web sites and so on. There are good reasons why people make those mistakes - they are all due to Search Engine Optimization (SEO) Superstitions, such as never link to a site with lower rank than yours, or do not link to sites unless they link to you (see SEO penalty for unreciprocated links).
 
Google quirks are best understood if you also understand the philosophy and approach of the inventors. What are Sergey Brin and Larry Page and their disciples trying to achieve, and how do they think they will do it? That also tells you what Google is likely to do in the next year or two, as well as what they have done in the past. That is why I have included their original Stanford University accounts of PageRank and of the prototype Google search engine - with an extensive discussion.
 

Tuesday, August 19, 2008

Can a Web page's position be sabotaged by irrelevant links?

The brief answer to the above question is evidently "No." The background, explanation and evidence are below.
 
The Google Search Engine Results Page (SERP) position of a Web page is determined in part by the number of links to it with a relevant keyword. Thus, a page about widgets that has 1,000 pages linking to it with the keyword widgets will have a higher rank than a page with similar characteristics in all other respects, but which has only 5 links to it with the keyword widgets in the anchor text.
 
A logical question, and one that I have seen posed frequently, is whether a page that is returned in a high position for a given keyword can be demoted to a lower position for that keyword simply by linking to it with a different keyword. That would seem to be a logical possibility, and I have even seen posts where people present elaborate calculations of difference scores to show how the irrelevant links should affect the score. I brought this question up here: Can irrelevant link anchor text hurt page positioning? People have argued about this point quite a lot, but I didn't see that anyone tried to test it. When I asked a support person at an optimization firm, it was evident from the reply that she didn't know either. Since I didn't know the answer and nobody was going to tell me, I decided to get an answer of sorts.
 
I decided to test it out using a Web page that I had previously brought to position 1 in Google for keyword Sexonomy. It is not hard to do that, since there are only a few hundred pages about sexonomy, despite the intriguing sounding name. Not everything to do with sex is "hot" it seems. As I had this page up there, used to prove a previous point about Search Engine Optimization, I could use it to test out this question. I generated numerous links to the Sexonomy page with keyword Mysterology. The result was that the page has (thus far) not moved from #1 place for Sexonomy, but it is now also #1 in Google for Mysterology, at least from this part of the world. Of course, this result may depend on many other things. My page may be so far ahead of the others for keyword Sexonomy that nothing will shake its position. Google may also apply a lot more filtering smarts to words that are frequently searched and commercially valuable. And of course, the Google algorithm might change one day real soon in one of those Google updates.
 
You can try the search and tell me if the same page is #1 for both keywords. Write to me at ami(dot)isseroff(at)gmail(dot)com
 
Ami Isseroff
 
 
 

Saturday, August 16, 2008

A great optimist: "Blogging Success Takes A Few Months"

A Webpronews article tells us: Blogging Success Takes A Few Months. Oh, really? Is that all it takes? True, the article has some fine print. But the headline is a whopper.
 
If you think you are going to put out a Web log that gets thousands - OK, hundreds - of readers a week for every article after a month or two or even three, you are probably in for a big shock, unless your name is something like Paris Hilton or Barack Obama.
 
From looking at Alexa ratings and Technorati ratings of Web sites and Web logs, and from experience, I would say that the above headline is like someone writing "Becoming a best selling author takes a few months" or "becoming a successful nuclear physicist takes a few months."
 
Sometimes it will take years, and sometimes you will never be a "success."
 
Most Web logs and Web sites are not what anyone would consider a success measured by number of visitors. Some of those sites and blogs are pretty good too. Many hours are spent in wonderful, important  writing that might be read by half a dozen people in a week. The average article at a large Web site might get as few as 200 visitors in a year, discounting Web spiders.
 
I didn't find that a Web log could be successful with fewer than a few hundred posts. Unless you write fast and furiously, every day, that is going to take more than a few months.
 
Of course, blogging success can take only a few months (or less)  IF you are publishing at a large platform and if you are already widely known from other publicity. A blog by a famous politician or media personality is usually a  success pretty quickly, even if they write rubbish there.
 
Good content is not going to buy your ticket to blogger heaven or Web site heaven either.
 
Leonard Woolf (the guy who married Virginia) pointed out in his autobiography that in their publishing business, they found that the long-term success of a book was in inverse proportion to the quantity of books sold in the first few years. In the first few years, publishing (anywhere) is like a toilet bowl - the big pieces float to the top.
 
Solid content, which all the wise men tell you is a "must," turns out to be a waste in Web logs, from my experience. In informational Web sites, solid content is king. Those pages are meant to be read and reread and relied on for reference. In Web logs, schlock often rules. I confess that some of my biggest blogging successes - relatively - were exploitation posts. A single article that must've gotten a few hundred thousand pageviews by now was about "Sex, Google and Arabs" - mostly nonsense based on some Google Trends data. The articles that I sweated over to provide deep analysis or great graphics are usually ignored. Oh verily, why do the wicked prosper? Who knows? Junkorama is not excluded in technical Web logs either. Popular SEO 'experts' often got to be experts by inventing technical nonsense - myths about optimal page length, disastrous advice to delete pages from the Web, articles that claim you can be a successful blogger in a few months, etc.
 
The things that you're liable to read in the SEO bible - it ain't necessarily so.
 
Ami Isseroff
 
 
 
 
 
 
 
 
 

Friday, August 15, 2008

Worst SEO advice I've seen thus far

I came across what might be the worst Search Engine optimization tip I have seen thus far. Some of these Search Engine Optimization (SEO) Superstitions are not just a waste of time. It seems they can actually hurt site visibility in search engines.
This one is the "advice" that Deleting old Web pages helps SEO. The recommendation is to delete "outdated" pages when doing site maintenance. That breaks existing backlinks as well as reducing the Authority of your site.
 
Another bad one I have seen is the advice that the Optimum length of title text is ten words or more. The text in the title tag should be exactly as long as your keyword and your keyword should not usually be 10 words or more.
 
The same "expert" quoted as giving this advice is the one who boasted of inventing an arbitary optimum page size.  See Optimum Page Size Superstition
 
Some of these SEO experts may be paid by the competition to ruin people's Web sites for them.
 
Check it out!
 
Ami Isseroff
 
 

Tuesday, August 12, 2008

SEO expert admits inventing optimum page size superstition

Buried somewhere in all those newsletters you get from various SEO firms, you will find, believe it or not, an admission by a prominent SEO "expert" that they had simply invented a supposed optimum size of 250 words for Web page text.
 
Different "experts" and SEO tools have often advocated various small page sizes. This seemed strange to me, because in comparisons, I found that the top pages usually contained thousands of words and the files were 60 KB or larger. Evidently, Web design firms invented this limit in order to be able to show clients that they were getting sites with many Web pages.
 
What is true, of course, is that if you only have a limited amount of content, you may want to spread it over more pages to increase the page count of the site, which can help SEO up to a point. Likewise, it may be difficult, depending on your subject, to write in a natural way and still maintain optimal keyword density. However, a really large and well-written page can draw a hundred or even a thousand times more visitors to your site than the average page, and it also presents opportunities for extensive embedded links to other pages in your site.
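
For readers wondering what "keyword density" actually measures, here is a minimal sketch in Python (my own illustration, with made-up text and keyword - not a recommendation to chase any magic number, which is exactly the kind of superstition this post is about):

    import re

    def keyword_density(text, keyword):
        """Percentage of the words on the page taken up by the keyword phrase."""
        words = re.findall(r"[a-z']+", text.lower())
        key_terms = keyword.lower().split()
        if not words or not key_terms:
            return 0.0
        n = len(key_terms)
        # Count occurrences of the whole phrase among consecutive words.
        hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == key_terms)
        return 100.0 * hits * n / len(words)

    page_text = "Nice widgets are our specialty. We sell nice widgets in every size and color."
    print(round(keyword_density(page_text, "nice widgets"), 1), "percent")

A long, well-written page with a modest, natural density will usually do far better than a thin 250-word page engineered to hit some invented target.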
 
The page size optimum is one of many Search Engine Optimization (SEO) Superstitions I examine. The latest is the claim that title tags must be at least ten words long, discussed in the next post.
 
Beware of strange claims by SEO "experts" that make no sense and contradict your own experience.
 
Ami Isseroff
 
 

Another SEO superstition - title length

A recent article quotes an "expert" as saying that the text of title tags must be at least ten words long in order to place well in search engines!
 
This is certainly false. Title tag text should be precisely as long as is needed to contain the page keyword, and a keyword should almost never be ten words long.
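
If you want to check this claim against real pages, a quick way is to pull the <TITLE> text out of the HTML and count its words. Here is a minimal sketch in Python, using only the standard library (the sample HTML is made up for illustration):

    from html.parser import HTMLParser

    class TitleGrabber(HTMLParser):
        """Collects the text inside the <title> tag."""
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    sample_html = "<html><head><title>Nice Widgets</title></head><body>...</body></html>"
    grabber = TitleGrabber()
    grabber.feed(sample_html)
    print(repr(grabber.title), "-", len(grabber.title.split()), "words")

Run something like this on the top results for your own keyword and see how many of them really carry ten-word titles.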
 

Big Brother Google is watching you - and not just them

"And Google, the leading online advertiser, stated that it has begun using Internet tracking technology that enables it to more precisely follow Web-surfing behavior across affiliated sites."
 
We know that Google is already gathering all sorts of information through their online software, and likewise Yahoo! and MSN. Now the plot thickens a bit. The technical question is how Google - and others - are going to use this information to shape their advertising facilities and search engine behavior. The paranoid question is what sort of unethical use can be made of these data, either by corporations or by rogue employees. If Joe Smith finds out you are looking at photos of naughty nude ladies and other "good stuff," can he trace you from your IP and use this information for blackmail?
 
And, by the way, never begin a sentence with "And" - even if you write for the Washington Post!
 
 
By Ellen Nakashima
Washington Post Staff Writer
Tuesday, August 12, 2008; D01
 
Several Internet and broadband companies have acknowledged using targeted-advertising technology without explicitly informing customers, according to letters released yesterday by the House Energy and Commerce Committee.
 
And Google, the leading online advertiser, stated that it has begun using Internet tracking technology that enables it to more precisely follow Web-surfing behavior across affiliated sites.
 
The revelations came in response to a bipartisan inquiry of how more than 30 Internet companies might have gathered data to target customers. Some privacy advocates and lawmakers said the disclosures help build a case for an overarching online-privacy law.
 
"Increasingly, there are no limits technologically as to what a company can do in terms of collecting information . . . and then selling it as a commodity to other providers," said committee member Edward J. Markey (D-Mass.), who created the Privacy Caucus 12 years ago. "Our responsibility is to make sure that we create a law that, regardless of the technology, includes a set of legal guarantees that consumers have with respect to their information."
 
Markey said he and his colleagues plan to introduce legislation next year, a sort of online-privacy Bill of Rights, that would require that consumers must opt in to the tracking of their online behavior and the collection and sharing of their personal data.
 
But some committee leaders cautioned that such legislation could damage the economy by preventing small companies from reaching customers. Rep. Cliff Stearns (R-Fla.) said self-regulation that focuses on transparency and choice might be the best approach.
 
Google, in its letter to committee Chairman John Dingell (D-Mich.), Markey, Stearns and Rep. Joe L. Barton (R-Tex.), stressed that it did not engage in potentially the most invasive of technologies -- deep-packet inspection, which companies such as NebuAd have tested with some broadband providers. But Google did note that it had begun to use across its network the "DoubleClick ad-serving cookie," a computer code that allows the tracking of Web surfing.
 
Alan Davidson, Google's director of public policy and government affairs, stated in the letter that users could opt out of a single cookie for both DoubleClick and the Google content network. He also said that Google was not yet focusing on "behavioral" advertising, which depends on Web site tracking.
 
But on its official blog last week, Google touted how its recent $3.1 billion merger with DoubleClick provides advertisers "insight into the number of people who have seen an ad campaign," as well as "how many users visited their sites after seeing an ad."
 
"Google is slowly embracing a full-blown behavioral targeting over its vast network of services and sites," said Jeffrey Chester, executive director of the Center for Digital Democracy. He said that Google, through its vast data collection and sophisticated data analysis tools, "knows more about consumers than practically anyone."
 
Microsoft and Yahoo have disclosed that they engage in some form of behavioral targeting. Yahoo has said it will allow users to turn off targeted advertising on its Web sites; Microsoft has yet to respond to the committee.
 
More than a dozen of the 33 companies queried said they do not conduct targeted advertising based on consumers' Internet activities. But, Chester said, a number of them engage in sophisticated interactive marketing. Advertisers on Comcast.net's site, for instance, are able to target advertising based on "over 3 billion page views" from "15 million unique users."
 
Comcast spokeswoman Sena Fitzmaurice stressed that the data are gathered exclusively for advertising on that site.
 
In their letters, broadband providers Knology and Cable One acknowledged that they recently ran tests using deep-packet-inspection technology provided by NebuAd to see whether it could help them serve up more relevant ads, but their customers were not explicitly alerted to the test. Cable One is owned by The Washington Post Co.
 
Both companies said that no personally identifiable information was used and that they have ended the trials. Cable One has no plans to adopt the technology, spokeswoman Melany Stroupe said. "However, if we do," she said, "we want people to be able to opt in."
 
Ari Schwartz, vice president of the Center for Democracy and Technology, said lawmakers are beginning to understand the convergence across platforms. "People are starting to see: 'Oh, we have these different industries that are collecting the same types of information to profile individuals and the devices they use on the network,'" he said. "Internet. Cellphones. Cable. Any way you tap into the network, concerns are raised."
 
Markey said yesterday that any legislation should generally require explicitly informing the consumer of the type of information that is being gathered and any intent to use it for a different purpose, and a right to say 'no' to the collection or use.
 
The push for overarching legislation is bipartisan. "A broad approach to protecting people's online privacy seems both desirable and inevitable," Barton said. "Advertisers and data collectors who record where customers go and what they do want profit at the expense of privacy."
 
As of yesterday evening, the committee had posted letters from 25 companies on its Web site.