
Online Reputation Management Training

What is Online Reputation Management
Why do I need ORM
ORM Techniques
Defense Mechanism
- Sub Domain creation
- Create Additional Sites
- Site Links and Double Listing
- Double Listing
- Wikipedia business page
- Presell Pages
- Press Release
- Create profiles on other sites
- Tagging
- Buying well-ranking sites
- PPC
- Optimize Website for Important terms
Defensive Ranking
- The Concept
- Legal Actions
Online Reputation Monitoring Tools
- RSS Alerts
- Google Reader
- Google Alert
- Comment Tracking
- Social Monitoring
- Twitter Alerts
- Personal Monitoring
ORM tips and tricks
ORM Model
ORM Guidelines
Removing Negative comments from Google

Pay Per Click (PPC) Course


PPC Training Introduction
What is Pay per Click Marketing (PPC)
Why we need PPC
Importance & Benefits of PPC
Other Pay-Per-Click Providers
Major Pay Per Click Search Engines
Google AdWords
Yahoo Search Marketing (Overture)

How to set up PPC Campaign
Set-up PPC Campaign
Google AdWords Account Structure
PPC campaign Navigation
Use Multiple Accounts
Use My Client Center (MCC)
What is "Click-Through Rate" (CTR)?
What is an Impression?
What is Conversion?
What is "Cost/Conversion"?
How to increase CTR & Conversion
What is Tracking Code?
How to do Keyword Research for PPC
What is Keyword Research?
Difference between SEO & PPC keywords
Research PPC Keywords
Importance of target keywords
Select Targeted/related Keywords
Analyze Competitors keywords
Find Keywords popularity & Search Volume
Categorize Keywords in Ad groups
PPC Keywords tools and resources
How to Create Ads for PPC Campaigns
Create Effective Ads and Ad Groups
Unique Title
Measurement of Title, Description & URL
Ads that produce better ROI
Examples of Effective Ads
Bid Management in PPC
What is bidding?
What is Quality Score?
How Does Quality Score Affect Bids?
How to Increase Position on Search?
Bid for Ad position
Define Bids for Each Keyword and Bid Management
User-Defined Bids vs. Google Automatic Bids
Importance of bidding techniques
Competitor Analysis for bidding
How Important is the Landing Page for PPC?
What is Landing Page?
Ads versus Landing Page
Importance of Landing Pages
Optimize your landing pages
How to Increase conversion rates
What is "Click-through-Rates" (CTRs)
Use 'Calls to Action'
Cost/Conversion
PPC reporting structure
Campaign Performance Reports
Keywords Performance Reports
Ad group Performance Reports
Ads Performance Reports
PPC Campaigns Tools
Google AdWords Keyword Tool
KeywordSpy
Google AdWords Editor

Web Analytics Training


Web Analytics is a hard-to-define term. It is used quite liberally by those who work in the world of Internet marketing, yet the concept makes little sense to a layman.
So let's look at an official definition of the term, to understand what exactly this concept embodies.
The Web Analytics Association, better known as the WAA, is the organisation that defines industry standards for the web analytics field. This organisation defines 'Web Analytics' as:
Web Analytics is the measurement, collection, analysis and reporting of Internet data for the purposes of understanding and optimizing Web usage.
Now that we know the definition, let's look at what web analytics really is. Is it merely a 'cool term' used to cloak the boring world of web statistics, or is there more to it?
Web Analytics is quite commonly confused with website statistics, but the two are not the same; there is a strong distinction between them.
Website statistics is just what it says: it merely gathers factual data about your website and presents it to you in a statistical format.
Web Analytics takes this data a step further, analysing the statistics to present meaningful results and conclusions about your website.
Say I state a factual figure: 240 visitors visit your website; 150 of them leave from the homepage, 20 from page 2, 30 from page 3, and so on. What information did you get out of this? Mere figures, which carry no underlying meaning. As a website owner, you will be more interested in knowing why 150 of your visitors leave from the homepage itself, or why your visitors are leaving at all. The numbers by themselves are useless.
Web Analytics solves this problem for you: it takes these numbers and analyses them using different statistical and analytical tools. The final result is a trend line of your visitors, a flowchart of the paths they follow, and so on. The same numbers that held no meaning by themselves now make sense; they embody trends and behaviour patterns.
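As a toy illustration of that first step, here is a minimal Python sketch that turns the raw exit counts into exit rates. The figures come from the example above; the page names and the script itself are purely illustrative and not tied to any particular analytics product.

# Minimal sketch: turn raw exit counts (the "statistics") into exit rates
# (a first small step towards "analytics"). Figures are from the example above.
total_visitors = 240
exits_by_page = {"homepage": 150, "page 2": 20, "page 3": 30}

for page, exits in exits_by_page.items():
    rate = exits / total_visitors * 100
    print(f"{page}: {exits} exits ({rate:.1f}% of all visitors)")
    # A real analytics tool goes much further: it segments these visitors,
    # plots the trend over time, and correlates exits with page changes.
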
This is what web analytics is all about: delving deeper into the data, looking beyond the superficial and finding the lines that interlink it. Analytics is about connecting the dots until a recognisable pattern emerges, and then presenting that pattern to you.
Web Analytics works on answering the 'why' of things. It focuses on explaining the causes behind the numbers and bringing forth the hidden patterns they form.
This is not to deny the overlap; after all, statistics forms the base of analytics. However, while statistics is one piece of the jigsaw puzzle, analytics is the whole puzzle.

Social Media Marketing Course



Below is the Social Media Marketing Course content:

  1. Definition of social media
  2. Types of social media
  3. Key terms to understand
  4. How Social Media influences audience
  5. How Social Media is affecting Google Search
  6. How to choose the right social media
  7. Developing unique content, positioning and voice
  8. How to generate Word of mouth
  9. Integrating social media into your website and blogs
  10. How to amplify content with multiple Social Media channels - Viral Marketing
  11. Using Twitter
    1. What is Twitter
    2. Why we love it
    3. Opportunity
    4. How to Setup a Twitter account
    5. Tips about setting up a Twitter account: Personal Bio | Profile Picture | Background Picture
    6. Following and Listening
    7. Building Relationship
    8. Tools for managing your Tweets
    9. Finding People and Companies on Twitter
    10. Understanding the Twitter Lingo
    11. Twitter Guidelines
    12. Twitter Tools
    13. Reputation Management | Keyword Research | Competition Analysis
    14. Automate Twitter
    15. How to Shorten and Measure your URLs
  12. Using Facebook
    1. Setting up Facebook and Privacy
    2. What Can You Do With Facebook
    3. Facebook Features: Photo Album | Events | The Wall and Notes | Chat | Groups and Fan Pages
    4. Facebook Benefits
    5. Facebook Fan Pages
    6. Facebook Profile
    7. Group Pages vs. Fan Pages
    8. Facebook Pages - what can you do
    9. How to promote your Facebook page
    10. Engagement and Conversation
    11. Being Found in Real Time Search
    12. Creating Facebook Application / Widget
    13. Pros and Cons of using Facebook
    14. Linking with YouTube
    15. Creating Events
    16. Building content calendar
  13. Using LinkedIn
    1. What is LinkedIn
    2. LinkedIn Answers
    3. LinkedIn Groups
    4. How to do link building in LinkedIn
    5. Creating SEO-friendly URLs
    6. Pros & Cons of using LinkedIn
  14. Using Google Buzz
    1. What is Google Buzz
    2. Google Buzz and Privacy Issues
  15. Google Plus
    1. What is Google Plus
    2. Features
    3. Tools & Techniques
    4. Google Plus: Circles  |  Hangouts  | Stream
    5. Google Plus goes Mobile
    6. Google + 1
    7. Google Plus for Businesses
  16. MySpace
  17. Kaboodle (only for product based site)
  18. Dos and Don'ts of Social Networking
  19. Video optimization
    1. YouTube
    2. MetaCafe
    3. Vimeo
    4. AOL Videos
  20. RSS feed optimization
  21. Wikis
  22. Blog / Micro-blog
    1. Dos and Don'ts
    2. Widgets
    3. Rules of Corporate blogging
    4. Tips and tricks for interesting articles
    5. Publishing and networking via blog
    6. Blog promotion
    7. Myblog
    8. Updating posts
    9. Blog Commenting
  23. Bookmarking
    1. StumbleUpon
    2. Digg
    3. Reddit
    4. Delicious
    5. Fave It
    6. E-buzz
  24. Power Point Presentation
    1. Slideshare
  25. Photo sharing
    1. Flickr
    2. Picasa Web
    3. TinyPic
  26. Forum and Online Communities
    1. Yahoo Q & A
    2. Answers.com
    3. Forum comments
    4. Google forum
    5. Yahoo groups
  27. Press Release/ News
    1. Writing with keywords
    2. Maximizing coverage
    3. Distribution
  28. Article creation & Submission
  29. Content sharing
    1. Squidoo lens
    2. Hubpages
    3. Scribd
  30. Behavioral and cultural standard for Social Media Interaction
  31. Linking all Social Media Accounts
  32. Optimizing Social Media content for Search Engine
  33. Importance of Short URLs and how to create them
  34. Link wheel creation
  35. Cleaning up negative results using SEO
  36. Measuring SM and ROI

Basic SEO training course content


The SEO Course will contain the following:


SEO Introduction:
  • Brief on Search Marketing
  • What is SEO
  • Importance of SEO
  • SEO Process
  • Black hat techniques
  • Search Engines and Directories
  • SEO Industry Research, Figures
  • How Search Engine works
SEO Research & Analysis:
  • Market Research
  • Keyword Research and Analysis
  • Keyword opportunity
  • Competitors Website Analysis
  • KEI Analysis
  • How to Choose Best Keywords
  • Tools available for Keyword Research
Website Design SEO Guidelines:
  • Content Research
  • Content Guidelines
  • Content Optimization
  • Design & Layout
  • HTML Coding Optimization & Standards
  • XML Sitemap / URL List Sitemap
On-page Optimization:
  • The Page Title
  • Body Text & Keyword Density
  • Headings
  • Bold Text
  • Domain Names & Suggestions
  • Canonical Tag
  • Meta Tags
  • Images and Alt Text
  • Link Titles
  • Internal Link Building
  • The Sitemap
  • Invisible Text
  • Server and Hosting Check
  • Robots Meta Tag
  • Doorway Pages
  • 301 Redirects
  • Duplicate content
Off-page Optimization:
  • Page Rank
  • Link Popularity
  • Link Building in Detail
  • Articles
  • Links Exchange
  • Reciprocal Linking
  • Posting to Forums
  • Directory Submission
  • Blog Submission
  • Submission to Search Engine
  • RSS Feeds Submissions
  • Press Release Submissions
  • Directory Submission Checklist
  • Forum Link Building
  • Competitor Link Analysis
Analytics:
  • Google Analytics
  • Installing Google Analytics
  • How to Study Google Analytics
  • Interpreting Bars & Figures
  • How Google Analytics can Help SEO
  • Advanced Reporting
  • Webmaster Central & Yahoo! Site Explorer
  • Open Site Explorer
  • Website Analysis using various SEO Tools available
Social Media Marketing:
  • Social Networking
  • Social Bookmarking
  • Press Releases
  • Articles / Directories
  • Blogging / Classifieds
  • Forums / RSS Feeds
SEO Tools:
  • Keyword Density Analyzer Tools
  • Google Tools
  • Yahoo Tools
  • Bing Tools
  • Text Tools
  • Comparison Tools
  • Link Popularity Tools
  • Search Engines Tools
  • Site Tools
  • Miscellaneous Tools

What Is Latent Semantic Indexing


Latent Semantic Indexing (LSI) is a system used by Google and other major search engines. The contents of a webpage are crawled by a search engine, and the most common words and phrases are collated and identified as the keywords for the page. LSI then looks for synonyms related to the title of your page. For example, if the title of your page were “Classic Cars”, the search engine would expect to find words relating to that subject in the content of the page as well, e.g. “collectors”, “automobile”, “Bentley”, “Austin” and “car auctions”.
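
Under the hood, classic LSI is an information-retrieval technique: build a term-document matrix and reduce it with singular value decomposition so that pages about the same concept sit close together even when they share few exact words. Below is a minimal sketch using scikit-learn (assumed installed); the sample documents are made up, and this approximates the textbook technique, not Google's actual ranking system.

# Minimal LSI sketch: TF-IDF term-document matrix + truncated SVD.
# Requires scikit-learn; the documents are made-up examples and this is the
# textbook technique, not Google's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "classic cars for collectors, Bentley and Austin at car auctions",
    "automobile restoration tips for vintage car owners",
    "fresh fruit recipes with apples, oranges and bananas",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)        # term-document matrix

lsi = TruncatedSVD(n_components=2, random_state=0)
doc_concepts = lsi.fit_transform(X)  # documents projected into "concept" space

# The two car-related documents should come out closer to each other in
# concept space than to the fruit document, despite sharing few exact keywords.
print(doc_concepts)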

Do Not Underestimate Content

SEO strategy has always placed great importance on the page title and on words enclosed in heading tags, especially the H1 tag. Words and phrases within the content that are bolded or italicized are also given greater weight. But you should be aware of LSI, as it can affect which keywords your website is ranked for.
If your page also contains synonyms, the search engine recognizes that your page is genuinely about the subject in the title and will place greater importance on the page. You may well already use good keyword techniques and add a few secondary keywords to your content, but the rest of the content should also be sprinkled with synonyms to convince search engine spiders.

A Response to Keyword-Stuffing

Latent Semantic Indexing came as a direct reaction to people trying to cheat search engines by cramming the Meta keywords tag with hundreds of keywords, filling the Meta description with more keywords, and stuffing the page content with nothing but random keywords and no subject-related material or worthwhile content.
Search engines like Google appreciate good content, and they encourage people to add good content because it helps keep the high-ranked listings relevant. Although producing good content will not guarantee you first-page rankings, it can improve your quality score.

When LSI is Not Relevant

LSI will not affect a squeeze page that has no intention of achieving a search engine rank anyway, due to its minimalistic content. But site owners and bloggers hoping to get on the search engines' good side should pay attention to LSI.
Latent Semantic Indexing is a good thing. It keeps content relevant and rich and benefits not only visitors, but website owners that produce quality material.

How to Befriend LSI

Latent Semantic Indexing is not rocket science; it is simple common sense. Here are some simple guidelines:
  1. If your page title is Learn to Play Tennis, make sure your article is about tennis.
  2. Do not overuse your keywords in the content. It could look like keyword stuffing and the search engines may red flag you.
  3. Never use Article Spinning Software – it spits out unreadable garble.
  4. If you outsource your content, choose a quality source.
  5. Check Google Webmaster Tools and see what keywords your pages are ranking for.
Latent Semantic Indexing is not a trick. You should bear it in mind when adding content to a web page, but do not get paranoid about it. The chances are that if you provide quality, relevant content you will never have to worry about falling foul of any LSI checks.

source: searchenginejournal

301 Redirect

source: webconfs

301 redirect is the most efficient and Search Engine Friendly method for webpage redirection. It's not that hard to implement and it should preserve your search engine rankings for that particular page. If you have to change file names or move pages around, it's the safest option. The code "301" is interpreted as "moved permanently".
You can test your redirection with a Search Engine Friendly Redirect Checker.
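
If you prefer to check the response yourself, here is a minimal Python sketch using the third-party requests library (assumed installed); the URL is a placeholder.

# Minimal sketch: verify that a URL answers with a 301 and where it points.
# Requires the "requests" library; the URL below is a placeholder.
import requests

resp = requests.head("http://www.old-url.com/some-page", allow_redirects=False)
print(resp.status_code)               # expect 301 for a permanent redirect
print(resp.headers.get("Location"))   # the destination URL
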
Below are a couple of methods to implement URL redirection, via code and via an .htaccess redirect.

IIS Redirect

  • In Internet Services Manager, right-click on the file or folder you wish to redirect
  • Select the radio button titled "a redirection to a URL"
  • Enter the redirection page
  • Check "The exact url entered above" and "A permanent redirection for this resource"
  • Click on 'Apply'

ColdFusion Redirect

<cfheader statuscode="301" statustext="Moved permanently">
<cfheader name="Location" value="http://www.new-url.com">


PHP Redirect

<?php
header( "HTTP/1.1 301 Moved Permanently" );
header( "Location: http://www.new-url.com" );
exit;
?>


ASP Redirect

<%@ Language=VBScript %>
<%
Response.Status="301 Moved Permanently"
Response.AddHeader "Location","http://www.new-url.com/"
%>


ASP .NET Redirect

<script runat="server">
private void Page_Load(object sender, System.EventArgs e)
{
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location","http://www.new-url.com");
}
</script>


JSP (Java) Redirect

<%
response.setStatus(301);
response.setHeader( "Location", "http://www.new-url.com/" );
response.setHeader( "Connection", "close" );
%>


CGI PERL Redirect

use CGI;
my $q = CGI->new;
# CGI.pm's redirect defaults to a 302; pass -status for a permanent 301
print $q->redirect( -uri => "http://www.new-url.com/", -status => 301 );


Ruby on Rails Redirect

def old_action
  headers["Status"] = "301 Moved Permanently"
  redirect_to "http://www.new-url.com/"
end


Redirect Old domain to New domain using htaccess redirect

Create a .htaccess file with the code below; it will ensure that all directories and pages of your old domain are correctly redirected to your new domain.
The .htaccess file needs to be placed in the root directory of your old website (i.e. the same directory where your index file is placed).
Options +FollowSymLinks
RewriteEngine on
RewriteRule (.*) http://www.newdomain.com/$1 [R=301,L]
Please REPLACE www.newdomain.com in the above code with your actual domain name.
In addition to the redirect I would suggest that you contact every backlinking site to modify their backlink to point to your new website.
Note: This .htaccess method of redirection works ONLY on Linux servers with the Apache mod_rewrite module enabled.

Redirect to www using htaccess redirect

Create a .htaccess file with the code below; it will ensure that all requests coming in to domain.com are redirected to www.domain.com.
The .htaccess file needs to be placed in the root directory of your website (i.e. the same directory where your index file is placed).
Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^domain\.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,NC]
Please REPLACE domain.com and www.domain.com in the above code with your actual domain name.
Note: This .htaccess method of redirection works ONLY on Linux servers with the Apache mod_rewrite module enabled.

How to Redirect HTML

Please refer to the htaccess redirect sections above if your site is hosted on a Linux server, and to 'IIS Redirect' if your site is hosted on a Windows server.

Free online tools

For Social Bookmarking (SBM)
http://www.socialmarker.com/

For Directory
http://www.fastdirectorysubmitter.com/

Classified
adbot.com

Online search engine submissions
http://freewebsubmission.com/

What is A/B testing and Multivariate Testing

In internet marketing, multivariate testing is a process by which more than one component of a website may be tested in a live environment. It can be thought of in simple terms as numerous A/B tests performed on one page at the same time. A/B tests are usually performed to determine the better of two content variations; multivariate testing can theoretically test the effectiveness of limitless combinations. The only limits on the number of combinations and the number of variables in a multivariate test are the amount of time it will take to get a statistically valid sample of visitors and computational power.

A/B Testing: Create two (or more) different versions of your website and see which one performs better.

Multivariate Testing: Discover which combination of changes (in headline, images, etc.) maximizes conversions.
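
Whichever type of test you run, the raw conversion counts need a significance check before you declare a winner. Here is a minimal, self-contained Python sketch of a two-proportion z-test; the visitor and conversion numbers are made-up examples, not output from any real tool.

# Minimal sketch: compare two page variants with a two-proportion z-test.
# Visitor/conversion counts are made-up examples.
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

p_a, p_b, p = ab_test(conv_a=40, n_a=1000, conv_b=58, n_b=1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  p-value: {p:.3f}")
# A small p-value (commonly below 0.05) suggests the difference is unlikely
# to be random noise; otherwise, keep the test running.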




source: http://visualwebsiteoptimizer.com/, Wikipedia

What is Spamdexing

1) Content spam

These techniques involve altering the logical view that a search engine has over the page's contents. They all aim at variants of the vector space model for information retrieval on text collections.

Keyword stuffing

Keyword stuffing involves the calculated placement of keywords within a page to raise the keyword count, variety, and density of the page. This is useful to make a page appear to be relevant for a web crawler in a way that makes it more likely to be found. Example: A promoter of a Ponzi scheme wants to attract web surfers to a site where he advertises his scam. He places hidden text appropriate for a fan page of a popular music group on his page, hoping that the page will be listed as a fan site and receive many visits from music lovers. Older versions of indexing programs simply counted how often a keyword appeared, and used that to determine relevance levels. Most modern search engines have the ability to analyze a page for keyword stuffing and determine whether the frequency is consistent with other sites created specifically to attract search engine traffic. Also, large webpages are truncated, so that massive dictionary lists cannot be indexed on a single webpage.
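
Keyword density, the share of a page's words taken up by one keyword, is trivial to measure yourself, which is also part of why engines can spot abuse so easily. Here is a minimal Python sketch; the sample text and keyword are invented for illustration.

# Minimal sketch: compute keyword density for a block of page text.
# The text and keyword are made-up examples.
import re

def keyword_density(text, keyword):
    words = re.findall(r"[a-z']+", text.lower())
    hits = words.count(keyword.lower())
    return hits, len(words), 100 * hits / len(words)

text = "Cheap widgets! Buy cheap widgets at the cheap widget store. Cheap, cheap, cheap."
hits, total, density = keyword_density(text, "cheap")
print(f"'cheap' appears {hits} times in {total} words ({density:.1f}%)")
# Abnormally high densities like this are exactly what keyword-stuffing
# filters are built to flag.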

Hidden or invisible text

Unrelated hidden text is disguised by making it the same color as the background, using a tiny font size, or hiding it within HTML code such as "no frame" sections, alt attributes, zero-sized DIVs, and "no script" sections. People screening websites for a search-engine company might temporarily or permanently block an entire website for having invisible text on some of its pages. However, hidden text is not always spamdexing: it can also be used to enhance accessibility.

Meta-tag stuffing

This involves repeating keywords in the Meta tags, and using meta keywords that are unrelated to the site's content. This tactic has been ineffective since 2005.

Doorway pages

"Gateway" or doorway pages are low-quality web pages created with very little content but are instead stuffed with very similar keywords and phrases. They are designed to rank highly within the search results, but serve no purpose to visitors looking for information. A doorway page will generally have "click here to enter" on the page. In 2006, Google ousted BMW for using "doorway pages" to the company's German site, BMW.de.[7]

Scraper sites

Scraper sites are created using various programs designed to "scrape" search-engine results pages or other sources of content and create "content" for a website.[citation needed] The specific presentation of content on these sites is unique, but is merely an amalgamation of content taken from other sources, often without permission. Such websites are generally full of advertising (such as pay-per-click ads[8]), or they redirect the user to other sites. It is even feasible for scraper sites to outrank original websites for their own information and organization names.

Article spinning

Article spinning involves rewriting existing articles, as opposed to merely scraping content from other sites, to avoid penalties imposed by search engines for duplicate content. This process is undertaken by hired writers or automated using a thesaurus database or a neural network.

2) Link spam

Link spam is defined as links between pages that are present for reasons other than merit.[9] Link spam takes advantage of link-based ranking algorithms, which gives websites higher rankings the more other highly ranked websites link to it. These techniques also aim at influencing other link-based ranking techniques such as the HITS algorithm.

Link-building software

A common form of link spam is the use of link-building software to automate the search engine optimization process.

Link farms

Link farms are tightly-knit communities of pages referencing each other, also known facetiously as mutual admiration societies.[10]

Hidden links

Putting hyperlinks where visitors will not see them to increase link popularity. Highlighted link text can help rank a webpage higher for matching that phrase.

Sybil attack

A Sybil attack is the forging of multiple identities for malicious intent, named after the famous multiple personality disorder patient "Sybil" (Shirley Ardell Mason). A spammer may create multiple web sites at different domain names that all link to each other, such as fake blogs (known as spam blogs).

Spam blogs

Spam blogs are blogs created solely for commercial promotion and the passage of link authority to target sites. Often these "splogs" are designed in a misleading manner that gives the impression of a legitimate website, but upon close inspection they will often be found to be written using spinning software or to contain very poorly written and barely readable content. They are similar in nature to link farms.

Page hijacking

Page hijacking is achieved by creating a rogue copy of a popular website which shows contents similar to the original to a web crawler but redirects web surfers to unrelated or malicious websites.

Buying expired domains

Some link spammers monitor DNS records for domains that will expire soon, then buy them when they expire and replace the pages with links to their pages. See Domaining. However, Google resets the link data on expired domains. Some of these techniques may be applied for creating a Google bomb — that is, to cooperate with other users to boost the ranking of a particular page for a particular query.

Cookie stuffing

Cookie stuffing involves placing an affiliate tracking cookie on a website visitor's computer without their knowledge, which will then generate revenue for the person doing the cookie stuffing. This not only generates fraudulent affiliate sales, but also has the potential to overwrite other affiliates' cookies, essentially stealing their legitimately earned commissions.

Using world-writable pages

Web sites that can be edited by users can be used by spamdexers to insert links to spam sites if the appropriate anti-spam measures are not taken.
Automated spambots can rapidly make the user-editable portion of a site unusable. Programmers have developed a variety of automated spam prevention techniques to block or at least slow down spambots.

Spam in blogs

Spam in blogs is the placing or solicitation of links randomly on other sites, placing a desired keyword into the hyperlinked text of the inbound link. Guest books, forums, blogs, and any site that accepts visitors' comments are particular targets and are often victims of drive-by spamming, where automated software creates nonsense posts with links that are usually irrelevant and unwanted. Many blog platforms, such as WordPress and Blogger, make their comment sections nofollow by default due to concerns over spam.

Comment spam

Comment spam is a form of link spam that has arisen in web pages that allow dynamic user editing such as wikis, blogs, and guestbooks. It can be problematic because agents can be written that automatically randomly select a user edited web page, such as a Wikipedia article, and add spamming links.[11]

Wiki spam

Wiki spam is a form of link spam on wiki pages. The spammer uses the open editability of wiki systems to place links from the wiki site to the spam site. The subject of the spam site is often unrelated to the wiki page where the link is added. In early 2005, Wikipedia implemented a default "nofollow" value for the "rel" HTML attribute. Links with this attribute are ignored by Google's PageRank algorithm. Forum and Wiki admins can use these to discourage Wiki spam.

Referrer log spamming

Referrer spam takes place when a spam perpetrator or facilitator accesses a web page (the referee), by following a link from another web page (the referrer), so that the referee is given the address of the referrer by the person's Internet browser. Some websites have a referrer log which shows which pages link to that site. By having a robot randomly access many sites enough times, with a message or specific address given as the referrer, that message or Internet address then appears in the referrer log of those sites that have referrer logs. Since some Web search engines base the importance of sites on the number of different sites linking to them, referrer-log spam may increase the search engine rankings of the spammer's sites. Also, site administrators who notice the referrer log entries in their logs may follow the link back to the spammer's referrer page.

Other types of spamdexing

Mirror websites

A mirror site is the hosting of multiple websites with conceptually similar content but using different URLs. Some search engines give a higher rank to results where the keyword searched for appears in the URL.

URL redirection

URL redirection is the taking of the user to another page without his or her intervention, e.g., using META refresh tags, Flash, JavaScript, Java or Server side redirects. However, 301 Redirect or permanent redirect is not considered as a malicious behaviour.

Cloaking

Cloaking refers to any of several means to serve a page to the search-engine spider that is different from that seen by human users. It can be an attempt to mislead search engines regarding the content on a particular web site. Cloaking, however, can also be used to ethically increase accessibility of a site to users with disabilities or provide human users with content that search engines aren't able to process or parse. It is also used to deliver content based on a user's location; Google itself uses IP delivery, a form of cloaking, to deliver results. Another form of cloaking is code swapping, i.e., optimizing a page for top ranking and then swapping another page in its place once a top ranking is achieved.

Google Panda and Google Penguin


Google Penguin is a code name[1] for a Google algorithm update that was first announced on April 24, 2012. The update is aimed at decreasing search engine rankings of websites that violate Google’s Webmaster Guidelines[2] by using now declared black-hat SEO techniques, such as keyword stuffing,[3] cloaking,[4] participating in link schemes,[5] deliberate creation of duplicate content,[6] and others.

The main target of Google Penguin is spamdexing (including link bombing).

In computing, spamdexing (also known as search spam, search engine spam, web spam or search engine poisoning)[1] is the deliberate manipulation of search engine indexes. It involves a number of methods, such as repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed in a manner inconsistent with the purpose of the indexing system.[2][3] It could be considered to be a part of search engine optimization, though there are many search engine optimization methods that improve the quality and appearance of the content of web sites and serve content useful to many users.[4] Search engines use a variety of algorithms to determine relevancy ranking. Some of these include determining whether the search term appears in the body text or URL of a web page. Many search engines check for instances of spamdexing and will remove suspect pages from their indexes. Also, people working for a search-engine organization can quickly block the results-listing from entire websites that use spamdexing, perhaps alerted by user complaints of false matches. The rise of spamdexing in the mid-1990s made the leading search engines of the time less useful. Using sinister methods to have websites rank higher in search engine results is commonly referred to in the SEO (Search Engine Optimization) industry as "Black Hat SEO."[5]
 

Google Panda is a change to Google's search results ranking algorithm that was first released in February 2011. The change aimed to lower the rank of "low-quality sites" or "thin sites",[1] and return higher-quality sites near the top of the search results. CNET reported a surge in the rankings of news websites and social networking sites, and a drop in rankings for sites containing large amounts of advertising.[2] This change reportedly affected the rankings of almost 12 percent of all search results.[3] Soon after the Panda rollout, many websites, including Google's webmaster forum, became filled with complaints of scrapers/copyright infringers getting better rankings than sites with original content. At one point, Google publicly asked for data points[4] to help detect scrapers better. Google's Panda has received several updates since the original rollout in February 2011, and the effect went global in April 2011. To help affected publishers, Google published an advisory on its blog,[5] thus giving some direction for self-evaluation of a website's quality. Google has provided a list of 23 bullet points on its blog answering the question of "What counts as a high-quality site?" that is supposed to help webmasters "step into Google's mindset".[6]

source: wikipedia

SEO Job Requirements


Manager - Web Analytics

  • Evaluate business goals and objectives from multiple business teams and develop tracking/tagging strategies to allow individuals and teams to measure success
  • Consulting directly with clients and/or their agencies on projects requiring web analytics platform selection, implementation, platform remediation, and dashboard development
  • Work with client development teams to install, configure & use web analytics services such as Google Analytics, Omniture, WebTrends, Urchin, Yahoo! Web Analytics and others
  • Perform quality assurance tests on tracking implementations
  • Assist with tracking and improving the results of clients' marketing campaigns
  • Build, update and deliver weekly and monthly reports, scorecards, dashboards and key performance reports using Excel and BI tools
  • Build automated Dashboards and Scorecards for various business stakeholders
  • Provide best practices consulting services to clients in solving their Web analytics platform strategy and technical needs
  • Partner with multiple business units within the client’s organization as well as outside the company to ensure that best practices in metrics and decision making are being exposed to the client management and core website decision makers
  • Conduct platform training and knowledge sharing in web analytics for clients and the project team

Project Manager - Pay Per Click


What we are looking for


  • 3-6 years experience
  • BE (any stream) from a reputed engineering college
  • GAP Certified Professional with proficiency in handling Google AdWords, Yahoo and Bing Paid Search Marketing platforms
  • Manage complex projects with an ability to handle multiple projects at a time
  • Set and meet deadlines for self as well as others in the Team
  • Independently handle clients
  • Maximize client’s profits by using your creative and analytical skills to write advertisements, plan campaign strategy, analyze traffic statistics and manage client’s marketing budget effectively
  • Show results with majority of campaigns performing better than benchmarks
  • Proactively research and contribute to the research team
  • Prior experience in handling bid management tools is an additional (not necessary) advantage
  • Strong in verbal, numerical and strategic reasoning
  • Keen interest in technology based marketing and advertising, e-commerce and other web based channels
  • Superb communication skills
  • Self-starter / self-motivated individual
  • People Management
  • Disciplined: On attendance, deliverables and adhering to company ethical standards
  • High intellectual curiosity; willingness to learn and explore

What you'll do


  • Mentoring and guiding sub-ordinates and inculcating the culture of the firm
  • Ensure timely implementation of projects
  • Primary contact point for the client
  • Leading a team of approximately 5 (after 4-6 months) in developing internet marketing strategies for clients in India and abroad
  • Day to day client servicing and coordinating between the client and our team
  • Setting and meeting deadlines / team-lines in terms of project deliverables

Project Manager - SEO


Responsibilities


  • Handling a team of 5 Assistant Project Managers
  • Help develop SEO strategies for clients
  • Should be capable of handling multiple SEO projects at a time
  • Should be able to analyze Analytics and Webmaster tools data and take action on it to improve the overall SEO performance
  • Help your team members
  • Handle on page & off page SEO activities for multiple clients
  • Understand the business of the client
  • Analyse a website from an SEO and Technical perspective
  • Perform a thorough keyword analysis for the site
  • Prepare an On Page Optimization report incorporating the various on page elements involved in SEO
  • Get the on page optimization report implemented
  • Formulate a link building strategy for your projects
  • Analyze the progress of your site using various analytical and webmaster tools
  • Tweak the on page and off page optimization depending on performance
  • Keeping abreast of the latest updates in search engine algorithms & the search world in general
  • Keeping abreast with the latest happenings in Google Analytics Platforms and Google Webmaster World
  • Keeping abreast with progress in the internet marketing world on the whole
  • Sharing your research with the entire team
  • Handling the day to day activities of your Assistant Project Managers
  • Communicate with Clients and Update them on the progress of their project
  • Identify Upselling opportunities within projects
  • Manage the workload of your team
  • Help in hiring initiatives for the SEO Team
  • Participation in the training process
  • Share your learning with the entire SEO team & the rest of the company in General

Job Related Skills


  • Good Understanding of HTML
  • Capable of Comprehending Web technologies such as PHP, ASP, Java
  • Should have a thorough knowledge of Search Engines and their Algorithms
  • GAIQ Certification
  • Knowledge on how the search industry is progressing in general
  • Knowledge of Internet Marketing as a whole
  • Knowledge of techniques used to generate revenue online

Soft Skills


  • Excellent written and Spoken English
  • Good Communication Skills
  • Good Interpersonal Skills
  • Training Skills
  • Great people managing skills
  • Handling multiple clients and team members at one go
  • Capable of collaborating with other teams
  • Capable of leading Client Calls

Most Important Skills


  • People management and training skills
  • Identify underperforming projects and strategize a turnaround
  • Incorporate latest happenings in the Search world into the internal process and track results
  • Make presentations and case studies
  • Lead knowledge transfer Initiatives


 


 

How to Convert from HTML to XHTML

  1. Add an XHTML <!DOCTYPE> to the first line of every page, e.g. <!DOCTYPE html>
  2.  Add an xmlns attribute to the html element of every page: <html xmlns="http://www.w3.org/1999/xhtml">
  3.  Change all element names to lowercase
  4. Close all empty elements
  5. Change all attribute names to lowercase
  6. Quote all attribute values

Difference between HTML and DHTML

HTML (Hyper Text Markup Language) is the most widely accepted language used to build websites. It is the main structure of a website. It builds tables, creates divisions, gives a heading message (in the title bar of the browser), and actually outputs text.

DHTML (Dynamic Hyper Text Markup Language) is not a language, but the art of using HTML, JavaScript, and CSS together to create dynamic things, such as navigation menus.

Information about buying and selling links that pass PageRank

source: http://googlewebmastercentral.blogspot.in

Our goal is to provide users the best search experience by presenting equitable and accurate results. We enjoy working with webmasters, and an added benefit of our working together is that when you make better and more accessible content, the internet, as well as our index, improves. This in turn allows us to deliver more relevant search results to users.

If, however, a webmaster chooses to buy or sell links for the purpose of manipulating search engine rankings, we reserve the right to protect the quality of our index. Buying or selling links that pass PageRank violates our webmaster guidelines. Such links can hurt relevance by causing:

- Inaccuracies: False popularity and links that are not fundamentally based on merit, relevance, or authority
- Inequities: Unfair advantage in our organic search results to websites with the biggest pocketbooks

In order to stay within Google's quality guidelines, paid links should be disclosed through a rel="nofollow" or other techniques such as doing a redirect through a page which is robots.txt'ed out. Here's more information explaining our stance on buying and selling links that pass PageRank:

February 2003: Google's official quality guidelines have advised "Don't participate in link schemes designed to increase your site's ranking or PageRank" for several years.

September 2005: I posted on my blog about text links and PageRank.

December 2005: Another post on my blog discussed this issue, and said

Many people who work on ranking at search engines think that selling links can lower the quality of links on the web. If you want to buy or sell a link purely for visitors or traffic and not for search engines, a simple method exists to do so (the nofollow attribute). Google’s stance on selling links is pretty clear and we’re pretty accurate at spotting them, both algorithmically and manually. Sites that sell links can lose their trust in search engines.

September 2006: In an interview with John Battelle, I noted that "Google does consider it a violation of our quality guidelines to sell links that affect search engines."

January 2007: I posted on my blog to remind people that "links in those paid-for posts should be made in a way that doesn’t affect search engines."

April 2007: We provided a mechanism for people to report paid links to Google.

June 2007: I addressed paid links in my keynote discussion during the Search Marketing Expo (SMX) conference in Seattle. Here's a video excerpt from the keynote discussion. It's less than a minute long, but highlights that Google is willing to use both algorithmic and manual detection of paid links that violate our quality guidelines, and that we are willing to take stronger action on such links in the future.

June 2007: A post on the official Google Webmaster Blog noted that "Buying or selling links to manipulate results and deceive search engines violates our guidelines." The post also introduced a new official form in Google's webmaster console so that people could report buying or selling of links.

June 2007: Google added more specific guidance to our official webmaster documentation about how to report buying or selling links and what sort of link schemes violate our quality guidelines.

August 2007: I described Google's official position on buying and selling links in a panel dedicated to paid links at the Search Engine Strategies (SES) conference in San Jose.

September 2007: In a post on my blog recapping the SES San Jose conference, I also made my presentation available to the general public (PowerPoint link).

October 2007: Google provided comments for a Forbes article titled "Google Purges the Payola".

October 2007: Google officially confirmed to Search Engine Land that we were taking stronger action on this issue, including decreasing the toolbar PageRank of sites selling links that pass PageRank.

October 2007: An email that I sent to Search Engine Journal also made it clear that Google was taking stronger action on buying/selling links that pass PageRank.

We appreciate the feedback that we've received on this issue. A few of the more prevalent questions:

Q: Is buying or selling links that pass PageRank a violation of Google's guidelines? Why?
A: Yes, it is, for the reasons we mentioned above. I also recently did a post on my personal blog that walks through an example of why search engines wouldn't want to count such links. On a serious medical subject (brain tumors), we highlighted people being paid to write about a brain tumor treatment when they hadn't been aware of the treatment before, and we saw several cases where people didn't do basic research (or even spellchecking!) before writing paid posts.

Q: Is this a Google-only issue?
A: No. All the major search engines have opposed buying and selling links that affect search engines. For the Forbes article Google Purges The Payola, Andy Greenberg asked other search engines about their policies, and the results were unanimous. From the story:

Search engines hate this kind of paid-for popularity. Google's Webmaster guidelines ban buying links just to pump search rankings. Other search engines including Ask, MSN, and Yahoo!, which mimic Google's link-based search rankings, also discourage buying and selling links.

Other engines have also commented about this individually, e.g. a search engine representative from Microsoft commented in a recent interview and said

The reality is that most paid links are a.) obviously not objective and b.) very often irrelevant. If you are asking about those then the answer is absolutely there is a risk. We will not tolerate bogus links that add little value to the user experience and are effectively trying to game the system.

Q: Is that why we've seen some sites that sell links receive lower PageRank in the Google toolbar?
A: Yes. If a site is selling links, that can affect our opinion about the value of that site or cause us to lose trust in that site.

Q: What recourse does a site owner have if their site was selling links that pass PageRank, and the site's PageRank in the Google toolbar was lowered?
A: The site owner can address the violations of the webmaster guidelines and submit a reconsideration request in Google's Webmaster Central console. Before doing a reconsideration request, please make sure that all sold links either do not pass PageRank or are removed.

Q: Is Google trying to tell webmasters how to run their own site?
A: No. We're giving advice to webmasters who want to do well in Google. As I said in this video from my keynote discussion in June 2007, webmasters are welcome to make their sites however they like, but Google in turn reserves the right to protect the quality and relevance of our index. To the best of our knowledge, all the major search engines have adopted similar positions.

Q: Is Google trying to crack down on other forms of advertisements used to drive traffic?
A: No, not at all. Our webmaster guidelines clearly state that you can use links as means to get targeted traffic. In fact, in the presentation I did in August 2007, I specifically called out several examples of non-Google advertising that are completely within our guidelines. We just want disclosure to search engines of paid links so that the paid links won't affect search engines.

Q: I'm aware of a site that appears to be buying/selling links. How can I get that information to Google?
A: Read our official blog post about how to report paid links from earlier in 2007. We've received thousands and thousands of reports in just a few months, but we welcome more reports. We appreciate the feedback, because it helps us take direct action as well as improve our existing algorithmic detection. We also use that data to train new algorithms for paid links that violate our quality guidelines.

Q: Can I get more information?
A: Sure. I wrote more answers about paid links earlier this year if you'd like to read them. And if you still have questions, you can join the discussion in our Webmaster Help Group.

What is a canonical page?

Search engines can use the Sitemap as a reference when choosing canonical URLs on your site.
The word “canonical” simply means “preferred” in this case. Picking a preferred (canonical) URL becomes necessary when search engines see duplicate pages on your site.

So, as they don’t want any duplicates in the search results, search engines use a special algorithm to identify duplicate pages and pick just one URL to represent the group in the search results. Other webpages just get filtered out.

What is a canonical page?

A canonical page is the preferred version of a set of pages with highly similar content.

Why specify a canonical page?

It's common for a site to have several pages listing the same set of products. For example, one page might display products sorted in alphabetical order, while other pages display the same products listed by price or by rating:
http://www.example.com/product.php?item=swedish-fish&trackingid=1234567&sort=alpha&sessionid=5678asfasdfasfd
http://www.example.com/product.php?item=swedish-fish&trackingid=1234567&sort=price&sessionid=5678asfasdfasfd
If Google knows that these pages have the same content, we may index only one version for our search results. Our algorithms select the page we think best answers the user's query. Now, however, users can specify a canonical page to search engines by adding a <link> element with the attribute rel="canonical" to the <head> section of the non-canonical version of the page. Adding this link and attribute lets site owners identify sets of identical content and suggest to Google: "Of all these pages with identical content, this page is the most useful. Please prioritize it in search results."

How do I specify a canonical URL?

You can specify a canonical URL in two ways:
  • Add a rel="canonical" link to the <head> section of the non-canonical version of each HTML page. To specify a canonical link to the page http://www.example.com/product.php?item=swedish-fish, create a <link> element as follows:
    <link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish"/>
    Copy this link into the <head> section of all non-canonical versions of the page, such as http://www.example.com/product.php?item=swedish-fish&sort=price.
    If you publish content on both http://www.example.com/product.php?item=swedish-fish and https://www.example.com/product.php?item=swedish-fish, you can specify the canonical version of the page. Create the <link> element:
    <link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish"/>
    Add this link to the <head> section of https://www.example.com/product.php?item=swedish-fish.
  • Indicate the canonical version of a URL by responding with the Link rel="canonical" HTTP header. Adding rel="canonical" to the head section of a page is useful for HTML content, but it can't be used for PDFs and other file types indexed by Google Web Search. In these cases you can indicate a canonical URL by responding with the Link rel="canonical" HTTP header, like this (note that to use this option, you'll need to be able to configure your server):
    Link: <http://www.example.com/downloads/white-paper.pdf>; rel="canonical"
        
    Google currently supports these link header elements for Web Search only.

Is rel="canonical" a suggestion or a directive?

This new option lets site owners suggest the version of a page that Google should treat as canonical. Google will take this into account, in conjunction with other signals, when determining which URL sets contain identical content, and calculating the most relevant of these pages to display in search results.

Can the link be relative or absolute?

rel="canonical" can be used with relative or absolute links, but we recommend using absolute links to minimize potential confusion or difficulties. If your document specifies a base link, any relative links will be relative to that base link.

Must the content on a set of pages be similar to the content on the canonical version?

Yes. The rel="canonical" attribute should be used only to specify the preferred version of many pages with identical content (although minor differences, such as sort order, are okay).
For instance, if a site has a set of pages for the same model of dance shoe, each varying only by the color of the shoe pictured, it may make sense to set the page highlighting the most popular color as the canonical version so that Google may be more likely to show that page in search results. However, rel="canonical" would not be appropriate if that same site simply wanted a gel insole page to rank higher than the shoe page.

What happens if rel="canonical" points to a non-existent page? Or if more than one page in a set is specified as the canonical version?

We'll do our best to algorithmically determine an appropriate canonical page, just as we've done in the past.

Can Google follow a chain of rel="canonical" designations?

Yes, to some extent, but to ensure optimal canonicalization, we strongly recommend that you update links to point to a single canonical page.

Can rel="canonical" be used to suggest a canonical URL on a completely different domain?

There are situations where it's not easily possible to set up redirects. This could be the case when you need to migrate to a new domain name using a web server that cannot create server-side redirects. In this case, you can use the rel="canonical" link element to specify the exact URL of the domain preferred for indexing. While the rel="canonical" link element is seen as a hint and not an absolute directive, we do try to follow it where possible.

source: google

Getting Site Architecture Right


There are many ways to organize pages on a site. Unfortunately, some common techniques of organizing information can also harm your SEO strategy.
Sites organized by a hierarchy determined without reference to SEO might not be ideal, because the site architecture is unlikely to emphasize links to the information a searcher finds most relevant. An example would be burying high-value keyword pages deep within a site's structure, as opposed to near the top, simply because those pages don't fit easily within a "home", "about us", "contact" hierarchy.
In this article, we’ll look at ways to align your site architecture with search visitor demand.

Start By Building A Lexicon

Optimal site architecture for SEO is architecture based around language visitors use. Begin with keyword research.
Before running a keyword mining tool, make a list of the top ten competitor sites that are currently ranking well in your niche and evaluate them in terms of language. What phrases are common? What questions are posed? What answers are given, and how are the answers phrased? What phrases/topics are given the most weighting? What phrases/topics are given the least weighting?
You’ll start to notice patterns, but for more detailed analysis, dump the phrases and concepts into a spreadsheet, which will help you determine frequency.
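A spreadsheet works fine, but even a few lines of code can do the frequency count for you. Here is a minimal Python sketch; the phrase list is a made-up example of notes collected from competitor pages.

# Minimal sketch: count how often candidate phrases appear across the
# competitor pages you reviewed. The phrase list is a made-up example.
from collections import Counter

phrases_seen = [
    "cape cod real estate", "waterfront rentals", "cape cod real estate",
    "commercial property", "waterfront rentals", "cape cod real estate",
]

for phrase, count in Counter(phrases_seen).most_common():
    print(f"{count:3d}  {phrase}")
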
Once you’ve discovered key concepts, phrases and themes, run them through a keyword research tool to find synonyms and the related concepts your competitors may have missed.
One useful, free tool that can group keyword concepts is the Google AdWords Editor. Use its keyword grouper function, described in "How To Organize Keywords", with the "generate common terms" option to automatically create keyword groupings.
Another is the Google Contextual Targeting Tool.
Look at your own site logs for past search activity. Trawl through related news sites, Facebook groups, industry publications and forums. Build up a lexicon of phrases that your target visitors use.
Then use visitor language as the basis of your site hierarchy.

Site Structure Based On Visitor Language

Group the main concepts and keywords into thematic units.
For example, a site about fruit might be broken down into key thematic units such as “apple”, “pear”, “orange”, “banana” and so on.
Link each thematic unit down to sub themes i.e. for “oranges”, the next theme could include links to pages such as “health benefits of oranges”, “recipes using oranges”, etc, depending on the specific terms you’re targeting. In this way, you integrate keyword terms with your site architecture.
Here’s an example in the wild:

The product listing by category navigation down the left-hand side is likely based on keywords. If we click on, say, the “Medical Liability Insurance” link, we see a group of keyword-loaded navigation links that relate specifically to that category.

Evidence Based Navigation

A site might be about “cape cod real estate”. If I run this term through a keyword research tool, in this case the Google Keyword Tool, a few conceptual patterns present themselves: people search mainly by either geographic location (Edgartown, Provincetown, Chatham, etc.) or accommodation type (rentals, commercial, waterfront, etc.).

Makes sense, of course.
But notice what isn’t there?
For one thing, real estate searches by price. Yet, some real estate sites give away valuable navigation linkage to a price-based navigation hierarchy.
This is not to say a search function ordered by house value isn’t important, but ordering site information by house value isn’t necessarily a good basis for SEO-friendly site architecture. This functionality could be integrated into a search tool instead.
A good idea, in terms of aligning site architecture with SEO imperatives, would be to organise such a site by geographic location and/or accommodation type, as this matches the interests of search visitors. The site is made more relevant to search visitors than would otherwise be the case.

Integrate Site Navigation Everywhere

Site navigation typically involves concepts such as “home”, “about”, “contact”, “products” i.e. a few high-level tabs or buttons that separate information by function.
There’s nothing wrong with this approach, but the navigation concept for SEO purposes can be significantly widened by playing to the web’s core strengths. Tim Berners-Lee placed links at the heart of the web as the means to navigate from one related document to another, and links are still the web’s most common navigational tool.
“Navigational” links should appear throughout your copy. If people are reading your copy, and the topic is not quite what they want, they will either click back, or - if you’ve been paying close attention to previous visitor behaviour - will click on a link within your copy to another area of your site.
The body text on every page of your site is an opportunity to integrate specific, keyword-loaded navigation. As a bonus, this may encourage higher levels of click-through (as opposed to click-back), pass link juice to sub-pages, and ensure no page on your site is orphaned.

Using Site Architecture To Defeat Panda & Penguin

These two animals have a world of connotations, many of them unpleasant.
The Panda update was partly focused on user experience. Google is likely using interaction metrics, and if Google isn’t seeing what it deems to be positive visitor interaction, your pages, or your site as a whole, will likely take a hit.
What metrics are Google likely to be looking at? Bounce backs, for one. This is why relevance is critical. The more you know about your customers, and the more relevant link options you can give them to click deeper into your site, rather than click-back to the search results, the more likely you are to avoid being Panda-ized.
If you’ve got pages in your hierarchy that users don’t consider to be particularly relevant, either beef them up or remove them.
The Penguin update was largely driven by anchor text. If you use similar anchor-text keywords pointing to one page, Penguin is likely to cause you grief. This can happen even if you’re mixing up keywords, e.g. “cape cod houses”, “cape cod real estate”, “cape cod accommodation”. That level of keyword diversity may have been acceptable in the past, but it isn’t now.
Make links specific, and point them to specific, unique pages. Get rid of duplicate, or near-duplicate, pages. Each page should be unique, not just in terms of keywords used, but in terms of concept.
In a post-Panda/Penguin world, webmasters must have a razor-sharp focus on what information searchers find most relevant. Being close, but not quite what the visitor wanted, is an invitation for Google to sink you.
Build relevance into your information architecture.

source: seobook

Five Steps to SEO-Friendly Site URL Structure


1. Consolidate your www and the non-www domain versions

As a rule, there are two major versions of your domain indexed in the search engines: the www and the non-www version. These can be consolidated in more than one way, but I’ll mention the most widely accepted practice.
Most SEOs (in my experience) use the 301 redirect to point one version of their site to the other (or vice versa).
Alternatively (for instance, when you can’t do a redirect), you can specify your preferred version in Google Webmaster Tools in Configuration >> Settings >> Preferred Domain. However, this has certain drawbacks:
  • This takes care of Google only.
  • This option is restricted to root domains only. If you have an example.wordpress.com site, this method is not for you.
But why worry about the www vs non-www issue in the first place? Thing is, some of your backlinks may be pointing to your www version, while some could be going to the non-www version.
So, to make sure that both versions’ SEO value is consolidated, it’s better to explicitly establish this link between the two (either via the 301 redirect, or in Google Webmaster Tools, or by using a canonical tag – I’ll talk about that one a bit further).
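Whichever method you choose, it’s worth verifying that the redirect actually returns a 301 (permanent) rather than a 302 (temporary) status. Here is a minimal, standard-library Python sketch for that check; example.com is a placeholder for your own domain:

    # Check what status and Location header the non-www host returns.
    import http.client

    def check_redirect(host, path="/"):
        conn = http.client.HTTPConnection(host, timeout=10)
        conn.request("HEAD", path)
        resp = conn.getresponse()
        location = resp.getheader("Location")
        conn.close()
        return resp.status, location

    status, location = check_redirect("example.com")
    print(status, location)  # ideally something like: 301 http://www.example.com/

If you see a 302 here, the link value may not be consolidated the way a 301 would consolidate it.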

2. Avoid dynamic and relative URLs

Depending on your content management system, the URLs it generates may be “pretty” like this one:
www.example.com/topic-name
or “ugly” like this one:
www.example.com/?p=578544
As I said earlier, search engines have no problem with either variant, but for certain reasons it’s better to use static (prettier) URLs rather than dynamic (uglier) ones. Thing is, static URLs contain your keywords and are more user-friendly, since one can figure out what the page is about just by looking at the static URL’s name.
Besides, Google recommends using hyphens (-) instead of underscores (_) in URL names, since a phrase in which the words are connected using underscores is treated by Google as one single word, e.g. one_single_word is onesingleword to Google.
And, to check which other elements of your page should carry the same keywords as your URLs, have a look at screenshot 3 of the “On-Page SEO for 2013: Optimize Pages to Rank and Perform” guide that we released recently.
Besides, some web devs make use of relative URLs. The problem with relative URLs is that they are dependent on the context in which they occur. Once the context changes, the URL may not work. SEO-wise, it is better to use absolute URLs instead of relative ones, since the former are what search engines prefer.
Now, sometimes different parameters are added to the URL for analytics tracking or other reasons (such as sid, utm, etc.). To make sure these parameters don’t make the number of URLs with duplicate content grow out of control, you can do either of the following (a small scripted example of parameter stripping follows this list):
  • Ask Google to disregard certain URL parameters in Google Webmaster Tools in Configuration > URL Parameters.
  • See if your content management system allows you to consolidate URLs carrying additional parameters into their shorter counterparts.
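For the second option, the clean-up can be as simple as stripping the known tracking parameters before URLs are emitted or canonicalized. A minimal Python sketch, assuming the tracking parameters are known in advance:

    # Strip known tracking parameters so duplicate-content URLs collapse to one.
    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    TRACKING_PARAMS = {"sid", "utm_source", "utm_medium", "utm_campaign",
                       "utm_term", "utm_content"}

    def strip_tracking(url):
        parts = urlsplit(url)
        query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
                 if k not in TRACKING_PARAMS]
        return urlunsplit((parts.scheme, parts.netloc, parts.path,
                           urlencode(query), parts.fragment))

    print(strip_tracking("http://www.example.com/topic-name?utm_source=news&sid=abc"))
    # -> http://www.example.com/topic-name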

3. Create an XML Sitemap

An XML Sitemap is not to be confused with the HTML sitemap. The former is for the search engines, while the latter is mostly designed for human users.
What is an XML Sitemap? In plain words, it’s a list of your site’s URLs that you submit to the search engines. This serves two purposes:
  1. This helps search engines find your site’s pages more easily;
  2. Search engines can use the Sitemap as a reference when choosing canonical URLs on your site.
The word “canonical” simply means “preferred” in this case. Picking a preferred (canonical) URL becomes necessary when search engines see duplicate pages on your site.
So, as they don’t want any duplicates in the search results, search engines use a special algorithm to identify duplicate pages and pick just one URL to represent the group in the search results. Other webpages just get filtered out.
Now, back to sitemaps … One of the criteria search engines may use to pick a canonical URL for the group of webpages is whether this URL is mentioned in the website’s Sitemap.
So, which webpages should be included in your sitemap: all of your site’s pages, or not? In fact, for SEO reasons, it’s recommended to include only the webpages you’d like to show up in search.
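Generating such a file doesn’t require any special tooling. Here is a minimal Python sketch that writes a bare-bones sitemap for a couple of hypothetical URLs; real sitemaps can also carry optional fields like lastmod, but those are omitted here:

    # Write a minimal XML Sitemap using only the standard library.
    import xml.etree.ElementTree as ET

    def build_sitemap(urls, filename="sitemap.xml"):
        urlset = ET.Element("urlset",
                            xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for loc in urls:
            ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = loc
        ET.ElementTree(urlset).write(filename, encoding="utf-8",
                                     xml_declaration=True)

    build_sitemap([
        "http://www.example.com/",
        "http://www.example.com/topic-name",
    ])

Remember to list only the canonical versions of the pages you want to appear in search, then submit the file to the search engines (for Google, via Webmaster Tools).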

4. Close off irrelevant pages with robots.txt

There may be pages on your site that should be concealed from the search engines. These could be your “Terms and conditions” page, pages with sensitive information, etc. It’s better not to let these get indexed, since they usually don’t contain your target keywords and only dilute the semantic whole of your site.
The robots.txt file contains instructions for search engines as to which pages of your site should be skipped during the crawl. Note that robots.txt controls crawling rather than indexing; to keep a page out of the search results entirely, you would also give it a noindex robots meta tag.
Sometimes, however, unsavvy webmasters block or noindex pages that should not be blocked. Hence, whenever you start doing SEO for a site, it is important to make sure that no pages that should be ranking in search are disallowed in robots.txt or carry the noindex attribute; otherwise your most important pages can silently drop out of the search results.
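Before and after editing robots.txt, you can sanity-check the rules with Python’s built-in robot parser. A minimal sketch; the rules and paths below are hypothetical:

    # Check which paths a robots.txt blocks, using the standard library parser.
    from urllib.robotparser import RobotFileParser

    rules = [
        "User-agent: *",
        "Disallow: /terms-and-conditions",
        "Disallow: /checkout/",
    ]
    rp = RobotFileParser()
    rp.parse(rules)

    for path in ("/", "/terms-and-conditions", "/topic-name"):
        ok = rp.can_fetch("*", "http://www.example.com" + path)
        print(path, "crawlable" if ok else "blocked")

Note again that a disallowed URL can still surface in results as a bare link if other sites point to it; noindex on the page itself is the stronger guarantee.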

5. Specify canonical URLs using a special tag

Another way to highlight canonical URLs on your site is the so-called canonical tag. In geek speak, it’s not the tag itself that is canonical, but the tag’s parameter; still, we’ll call it the canonical tag by metonymy.
Note: the canonical tag should be applied only for the purpose of helping search engines decide on your canonical URL. For redirection of site pages, use redirects. And, for paginated content, it usually makes sense to employ rel=”next” and rel=”prev” tags instead.
For example, on Macy’s website, I can go to the Quilts & Bedspreads page directly, or I can take different routes from the homepage:
  • I can go to Homepage >> Bed & Bath >> Quilts & Bedspreads. The following URL, with my path recorded, is generated:
http://www1.macys.com/shop/bed-bath/quilts-bedspreads?id=22748&edge=hybrid&cm_sp=us_catsplash_bed-%26-bath-_-row6-_-quilts-%26-bedspreads
  • Or I can go to Homepage >> For the Home >> Bed & Bath >> Bedding >> Quilts & Bedspreads. The following URL is generated:
http://www1.macys.com/shop/bed-bath/quilts-bedspreads?id=22748&edge=hybrid
Now, all three URLs lead to the same content. And, if you look into the code of each page, you’ll see the following tag in the head element:
[Screenshot (SEJ): the rel=”canonical” link tag in the page’s head element]
As you see, for each of these URLs, a canonical URL is specified, which is the cleanest version of all the URLs in the group:
http://www1.macys.com/shop/bed-bath/quilts-bedspreads?id=22748
What this does is funnel the SEO value each of these three URLs might have down to one single URL that should be displayed in the search results (the canonical URL). Normally, search engines do a pretty good job of identifying canonical URLs themselves, but, as Susan Moskwa once wrote at Google Webmaster Central:
“If we aren’t able to detect all the duplicates of a particular page, we won’t be able to consolidate all of their properties. This may dilute the strength of that content’s ranking signals by splitting them across multiple URLs.”
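For reference, the element the article describes is the standard rel=”canonical” link tag in the page’s head. Here is a minimal Python sketch (standard library only) that extracts it from a snippet of markup; the HTML string mirrors the Macy’s example above but is typed out here rather than copied from their live page:

    # Pull the canonical URL out of a page's head section.
    from html.parser import HTMLParser

    class CanonicalFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.canonical = None
        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and attrs.get("rel") == "canonical":
                self.canonical = attrs.get("href")

    html = ('<head><link rel="canonical" href='
            '"http://www1.macys.com/shop/bed-bath/quilts-bedspreads?id=22748" />'
            '</head>')
    finder = CanonicalFinder()
    finder.feed(html)
    print(finder.canonical)

Run against each of the three URLs above, this would return the same canonical URL, which is exactly the consolidation Moskwa describes.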

source: searchenginejournal

Job responsibilities of SEO Analyst


Job Description:

  • Keyword research
  • Data mine web analytics to develop new keywords.
  • Collaborate with SEM counterpart to identify and leverage conversion data.
  • Competitor site analysis
  • Collaborate with writers to develop keyword targeted content.
  • Collaborate with webmaster to ensure SEO friendly site architecture.
  • On-page SEO optimization (Title tag, description, H1, H2, keyword density, internal links, page size, XHTML compliance, W3C compliance)
  • On-page conversion optimization using multivariate and A/B testing (use Google Website Optimizer to improve landing pages)
  • Off-page optimization / link building (select target URLs, link text, ...)
  • Link building through multiple channels and link purchasing through brokers and direct contacts (must be able to differentiate a quality site from a spam site).
  • Reporting on traffic, rankings, conversions and link building.
  • ROI reporting
  • Expert at Google Analytics and Google Website Optimizer
  • Very thorough knowledge of XHTML, DHTML, CSS, Dreamweaver.
  • Expert at using keyword research tools such as Wordtracker, Google Keyword Tool and Keyword Discovery
  • Expert at analysis, optimization and submission tools like WebPosition Gold.
  • Knowledge of W3C and XHTML compliance
  • Attention to detail and a team player