Contact No: +91 9990060202 / +91 9211656329, Email: mukeshdoriwal@gmail.com

SEO Tips & Tricks

Google Search Algorithm Update Yesterday

Google Update Brewing: it looks like there are algorithm changes happening in Google search right now. It is hard to say whether it is Panda, Penguin or something unrelated; at this time it doesn't look like Penguin, but it may be Panda-related. Google has not confirmed that there was an update, but it does appear that something major landed in the Google search results yesterday.


There is a great deal of discussion and chatter about it at WebmasterWorld, and all but one of the automated tracking tools showed significant changes.


*********************************************************************

Panda 4.1 — Google’s 27th Panda Update — Is Rolling Out
Google has announced that the latest version of its Panda Update — a filter designed to prevent "thin" or poor content from ranking well — has been released.
Google said in a post on Google+ that a "slow rollout" began earlier this week and will continue into next week before being complete. Google said that, depending on location, about 3% to 5% of search queries will be affected.
What's different about this latest release? Google says it's supposed to be more precise and will allow more high-quality small and medium-sized sites to rank better. From the post:
Based on user (and webmaster!) feedback, we’ve been able to discover a few more signals to help Panda identify low-quality content more precisely. This results in a greater diversity of high-quality small- and medium-sized sites ranking higher, which is nice.
New Chance for Some; New Penalty for Others
The rollout means anyone who was penalized by Panda in the last update has a chance to emerge, if they made the right changes. So if you were hit by Panda and made alterations to your site, you'll know by the end of next week whether those changes were good enough: if they were, you should see an increase in traffic.
The rollout also means that new sites not previously hit by Panda might get impacted. If you’ve seen a sudden traffic drop from Google this week, or note one in the coming days, then this latest Panda Update is likely to blame.

About That Number
Why are we calling it Panda 4.1? Well, Google itself called the last one Panda 4.0 and deemed it a major update. This isn’t as big of a change, so we’re going with Panda 4.1.
We actually prefer to number these updates in the order that they've happened, because trying to determine whether something is a "major" or "minor" Panda Update is imprecise and leads to numbering absurdities like having a Panda 3.92 Update.
But since Google called the last one Panda 4.0, we went with that name — and we’ll continue on with the old-fashioned numbering system unless it gets absurd again.
For the record, here’s the list of confirmed Panda Updates, with some of the major changes called out with their AKA (also known as) names:

Panda Update 1 (AKA Panda 1.0), Feb. 24, 2011 (11.8% of queries; announced; English in US only)
Panda Update 2 (AKA Panda 2.0), April 11, 2011 (2% of queries; announced; rolled out in English internationally)
Panda Update 3, May 10, 2011 (no change given; confirmed, not announced)
Panda Update 4, June 16, 2011 (no change given; confirmed, not announced)
Panda Update 5, July 23, 2011 (no change given; confirmed, not announced)
Panda Update 6, Aug. 12, 2011 (6-9% of queries in many non-English languages; announced)
Panda Update 7, Sept. 28, 2011 (no change given; confirmed, not announced)
Panda Update 8 (AKA Panda 3.0), Oct. 19, 2011 (about 2% of queries; belatedly confirmed)
Panda Update 9, Nov. 18, 2011 (less than 1% of queries; announced)
Panda Update 10, Jan. 18, 2012 (no change given; confirmed, not announced)
Panda Update 11, Feb. 27, 2012 (no change given; announced)
Panda Update 12, March 23, 2012 (about 1.6% of queries impacted; announced)
Panda Update 13, April 19, 2012 (no change given; belatedly revealed)
Panda Update 14, April 27, 2012 (no change given; confirmed; first update within days of another)
Panda Update 15, June 9, 2012 (1% of queries; belatedly announced)
Panda Update 16, June 25, 2012 (about 1% of queries; announced)
Panda Update 17, July 24, 2012 (about 1% of queries; announced)
Panda Update 18, Aug. 20, 2012 (about 1% of queries; belatedly announced)
Panda Update 19, Sept. 18, 2012 (less than 0.7% of queries; announced)
Panda Update 20, Sept. 27, 2012 (2.4% of English queries impacted; belatedly announced)
Panda Update 21, Nov. 5, 2012 (1.1% of English-language queries in US; 0.4% worldwide; confirmed, not announced)
Panda Update 22, Nov. 21, 2012 (0.8% of English queries affected; confirmed, not announced)
Panda Update 23, Dec. 21, 2012 (1.3% of English queries affected; confirmed, announced)
Panda Update 24, Jan. 22, 2013 (1.2% of English queries affected; confirmed, announced)
Panda Update 25, March 15, 2013 (confirmed as coming; not confirmed as having happened)
Panda Update 26 (AKA Panda 4.0), May 20, 2014 (7.5% of English queries affected; confirmed, announced)
Panda Update 27 (AKA Panda 4.1), Sept. 25, 2014 (3-5% of queries affected; confirmed, announced)
The latest update comes four months after the last, which suggests that this might be a new quarterly cycle that we’re on. Panda had been updated on a roughly monthly basis during 2012. In 2013, most of the year saw no update at all.
Of course, there could also have been unannounced Panda releases. The list above covers only those that have been confirmed by Google.


**************************************************************************

 Google Basics:

Learn how Google discovers, crawls, and serves web pages
When you sit down at your computer and do a Google search, you're almost instantly presented with a list of results from all over the web. How does Google find web pages matching your query, and determine the order of search results?

In the simplest terms, you could think of searching the web as looking in a very large book with an impressive index telling you exactly where everything is located. When you perform a Google search, our programs check our index to determine the most relevant search results to be returned ("served") to you.

The three key processes in delivering search results to you are:


Crawling: Does Google know about your site? Can we find it?
Indexing: Can Google index your site?
Serving: Does the site have good and useful content that is relevant to the user's search?
Crawling

Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

We use a huge set of computers to fetch (or "crawl") billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Google's crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
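The crawl loop described above (start from a seed list of URLs, fetch each page, extract its links, and add newly discovered URLs to the list) can be sketched with a toy in-memory link graph. The URLs and the crawl function below are illustrative, not Googlebot's actual implementation:

```javascript
// Toy link graph standing in for the web: URL -> outgoing links.
const web = {
  "http://a.example/": ["http://b.example/", "http://c.example/"],
  "http://b.example/": ["http://c.example/"],
  "http://c.example/": ["http://a.example/", "http://d.example/"],
  "http://d.example/": [],
};

// Breadth-first crawl: start from seed URLs, "fetch" each page,
// extract its links, and queue any URL not yet visited.
function crawl(seeds) {
  const frontier = [...seeds]; // pages waiting to be fetched
  const visited = new Set();   // pages already crawled
  while (frontier.length > 0) {
    const url = frontier.shift();
    if (visited.has(url)) continue;
    visited.add(url);
    for (const link of web[url] || []) {
      if (!visited.has(link)) frontier.push(link);
    }
  }
  return visited;
}
```

Starting from the single seed http://a.example/, the crawl discovers all four pages; Sitemap data would simply contribute extra entries to the initial seed list.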

Google doesn't accept payment to crawl a site more frequently, and we keep the search side of our business separate from our revenue-generating AdWords service.

Indexing


Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as Title tags and ALT attributes. Googlebot can process many, but not all, content types. For example, we cannot process the content of some rich media files or dynamic pages.
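The "massive index of all the words and their location on each page" is what information retrieval calls an inverted index. A minimal sketch, with page names and tokenization rules made up for illustration:

```javascript
// Build an inverted index: word -> list of { page, position } postings,
// so a query can look up which pages contain a word and where it occurs.
function buildIndex(pages) {
  const index = {};
  for (const [page, text] of Object.entries(pages)) {
    const words = text.toLowerCase().split(/\W+/).filter(Boolean);
    words.forEach((word, position) => {
      (index[word] = index[word] || []).push({ page, position });
    });
  }
  return index;
}

const index = buildIndex({
  "page1.html": "Google crawls the web",
  "page2.html": "The web is large",
});
// index["web"] now records that "web" occurs on both pages.
```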

Serving results
When a user enters a query, our machines search the index for matching pages and return the results we believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is the measure of the importance of a page based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site's PageRank. Not all links are equal: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content.
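The PageRank idea — each incoming link passes along a share of the linking page's own importance — is commonly illustrated with the textbook power-iteration formulation below. This is a simplification for intuition, not Google's production ranking code:

```javascript
// Simplified PageRank via power iteration: every page starts with equal
// rank, and on each round it passes 85% of its rank along its outlinks.
function pageRank(links, damping = 0.85, iterations = 50) {
  const pages = Object.keys(links);
  const n = pages.length;
  let rank = Object.fromEntries(pages.map((p) => [p, 1 / n]));
  for (let i = 0; i < iterations; i++) {
    const next = Object.fromEntries(pages.map((p) => [p, (1 - damping) / n]));
    for (const p of pages) {
      for (const target of links[p]) {
        next[target] += (damping * rank[p]) / links[p].length;
      }
    }
    rank = next;
  }
  return rank;
}

// "home" and "about" each receive links; "blog" receives none,
// so it ends up with the lowest rank.
const ranks = pageRank({
  home: ["about"],
  about: ["home"],
  blog: ["home", "about"],
});
```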

In order for your site to rank well in search results pages, it's important to make sure that Google can crawl and index your site correctly. Our Webmaster Guidelines outline some best practices that can help you avoid common pitfalls and improve your site's ranking.

Google's Did you mean and Google Autocomplete features are designed to help users save time by displaying related terms, common misspellings, and popular queries. Like our google.com search results, the keywords used by these features are automatically generated by our web crawlers and search algorithms. We display these predictions only when we think they might save the user time. If a site ranks well for a keyword, it's because we've algorithmically determined that its content is more relevant to the user's query.


==========================================================

Google Penguin Update 2013, With The Penguin 2.1 Spam-Filtering Algorithm, Is Now Live

The fifth confirmed release of Google’s “Penguin” spam-fighting algorithm is live. That makes it Penguin 5 by our count. But since this Penguin update is using a slightly improved version of Google’s “Penguin 2” second-generation technology, Google itself is calling it “Penguin 2.1.” Don’t worry. We’ll explain the numbering nonsense below, as well as what this all means for publishers.
New Version Of Penguin Live Today
The head of Google’s web spam team, Matt Cutts, shared the news on Twitter, saying the latest release would impact about 1 percent of all searches:
The link that Cutts points at, by the way, explains what Penguin was when it was first launched. It doesn’t cover anything new or changed with the latest release.
Previous Updates
Here are all the confirmed releases of Penguin to date:
Penguin 1 on April 24, 2012 (impacting around 3.1% of queries)
Penguin 2 on May 26, 2012 (impacting less than 0.1%)
Penguin 3 on October 5, 2012 (impacting around 0.3% of queries)
Penguin 4 (AKA Penguin 2.0) on May 22, 2013 (impacting 2.3% of queries)
Penguin 5 (AKA Penguin 2.1) on Oct. 4, 2013 (impacting around 1% of queries)
Why Penguin 2.1 AND Penguin 5?
If us talking about Penguin 5 in reference to something Google is calling Penguin 2.1 hurts your head, believe us, it hurts ours, too. But you can pin that blame back on Google. Here’s why.
When Google started releasing its “Panda” algorithm designed to fight low-quality content, it called the first one simply “Panda.” So when the second came out, people referred to that as “Panda 2.” When the third came out, people called that Panda 3, causing Google to say that the third release, because it was relatively minor, really should only be called Panda 2.1, the “point” being used to indicate how minor a change it was.
Google eventually, and belatedly, indicated that a Panda 3 release had happened, causing the numbering to move into Panda 3.0, Panda 3.1 and so on, until there had been so many “minor” updates that we had to resort to going further out in decimal places, to things like Panda 3.92.
That caused us here at Search Engine Land to decide it would be easier all around if we just numbered any confirmed update sequentially, in order of when they came. No matter how “big” or “small” an update might be, we’d just give it the next number on the list: Penguin 1, Penguin 2, Penguin 3 and so on.
Thanks For The Headache, Google
That worked out fine until Penguin 4, because Google typically didn’t give these updates numbers itself. It just said there was an update, and left it to us or others to attach a number to it.
But when Penguin 4 arrived, Google really wanted to stress that it was using what it deemed to be a major, next-generation change in how Penguin works. So, Google called it Penguin 2, despite all the references to a Penguin 2 already being out there, despite the fact it hadn’t really numbered many of these various updates before.
Today’s update, as can be seen above, has been dubbed Penguin 2.1 — so supposedly, it’s a relatively minor change to the previous Penguin filter that was being used. However, if it’s impacting around 1 percent of queries as Google says, that means it is more significant than what Google might have considered to be similar “minor” updates of Penguin 1.1 and Penguin 1.2.
What Is Penguin Again? And How Do I Deal With It?
For those new to the whole “Penguin” concept, Penguin is a part of Google’s overall search algorithm that periodically looks for sites that are deemed to be spamming Google’s search results but somehow still ranking well. In particular, it goes after sites that may have purchased paid links.
If you were hit by Penguin, you’ll likely know if you see a marked drop in traffic that begins today or tomorrow. To recover, you’ll need to do things like disavow bad links or manually have those removed. Filing a reconsideration request doesn’t help, because Penguin is an automated process. Until it sees that what it considers to be bad has been removed, you don’t recover.
If you were previously hit by Penguin and have taken actions hopefully meant to fix that, today and tomorrow are the days to watch. If you see an improvement in traffic, that’s a sign that you’ve escaped Penguin.
Here are previous articles with more on Penguin recovery and how it and other filters work as part of the ranking system:
The Google Dance Is Back
Two Weeks In, Google Talks Penguin Update, Ways To Recover & Negative SEO
How Google’s Disavow Links Tool Can Remove Penalties
Why Asking StumbleUpon To Remove Your Links Is Dumb
Google’s New Stance On Negative SEO: “Works Hard To Prevent” It
Still Seeing Post-Penguin Web Spam In Google Results? Let Google Know
Big Brand SEO & Penguin 2.0
Demystifying Link Disavowals, Penalties & More
What About Hummingbird?
If you’re wondering how Penguin fits into the new Google Hummingbird algorithm you may have heard about, think of Penguin as a part of Hummingbird, not as a replacement for it. Hummingbird is like Google’s entire ranking engine, whereas Penguin is like a small part of that engine, a filter that is removed and periodically replaced with what Google considers to be a better filter to help keep out bad stuff.
Source: http://searchengineland.com/penguin-2-1-and-5-live-173632
==================================================================================================

Google Hummingbird Algorithm

Google has a new search algorithm, the system it uses to sort through all the information it has when you search and come back with answers. It’s called “Hummingbird,” and below is what we know about it so far.
What’s a “search algorithm?”
That’s a technical term for what you can think of as a recipe that Google uses to sort through the billions of web pages and other information it has, in order to return what it believes are the best answers.
What’s “Hummingbird?”
It’s the name of the new search algorithm that Google is using, one that Google says should return better results.
So that “PageRank” algorithm is dead?
No. PageRank is one of over 200 major “ingredients” that go into the Hummingbird recipe. Hummingbird looks at PageRank — how important links to a page are deemed to be — along with other factors like whether Google believes a page is of good quality, the words used on it and many other things (see our Periodic Table Of SEO Success Factors for a better sense of some of these).
Why is it called Hummingbird?
Google told us the name comes from being “precise and fast.”
When did Hummingbird start? Today?
Google started using Hummingbird about a month ago, it said. Google only announced the change today.
What does it mean that Hummingbird is now being used?
Think of a car built in the 1950s. It might have a great engine, but it might also be an engine that lacks things like fuel injection or be unable to use unleaded fuel. When Google switched to Hummingbird, it’s as if it dropped the old engine out of a car and put in a new one. It also did this so quickly that no one really noticed the switch.
When’s the last time Google replaced its algorithm this way?
Google struggled to recall when any type of major change like this last happened. In 2010, the “Caffeine Update” was a huge change. But that was also a change mostly meant to help Google better gather information (indexing) rather than sorting through the information. Google search chief Amit Singhal told me that perhaps 2001, when he first joined the company, was the last time the algorithm was so dramatically rewritten.
What about all these Penguin, Panda and other “updates” — haven’t those been changes to the algorithm?
Panda, Penguin and other updates were changes to parts of the old algorithm, but not an entire replacement of the whole. Think of it again like an engine. Those things were as if the engine received a new oil filter or had an improved pump put in. Hummingbird is a brand new engine, though it continues to use some of the same parts of the old, like Penguin and Panda.
The new engine is using old parts?
Yes. And no. Some of the parts are perfectly good, so there was no reason to toss them out. Other parts are constantly being replaced. In general, Hummingbird — Google says — is a new engine built on both existing and new parts, organized in a way to especially serve the search demands of today, rather than one created for the needs of ten years ago, with the technologies back then.
What type of “new” search activity does Hummingbird help?
“Conversational search” is one of the biggest examples Google gave. People, when speaking searches, may find it more useful to have a conversation.
“What’s the closest place to buy the iPhone 5s to my home?” A traditional search engine might focus on finding matches for words — finding a page that says “buy” and “iPhone 5s,” for example.
Hummingbird should better focus on the meaning behind the words. It may better understand the actual location of your home, if you’ve shared that with Google. It might understand that “place” means you want a brick-and-mortar store. It might get that “iPhone 5s” is a particular type of electronic device carried by certain stores. Knowing all these meanings may help Google go beyond just finding pages with matching words.
In particular, Google said that Hummingbird is paying more attention to each word in a query, ensuring that the whole query — the whole sentence or conversation or meaning — is taken into account, rather than particular words. The goal is that pages matching the meaning do better, rather than pages matching just a few words.
I thought Google did this conversational search stuff already!
It does (see Google’s Impressive “Conversational Search” Goes Live On Chrome), but it had only been doing it really within its Knowledge Graph answers. Hummingbird is designed to apply the meaning technology to billions of pages from across the web, in addition to Knowledge Graph facts, which may bring back better results.
Does it really work? Any before-and-afters?
We don’t know. There’s no way to do a “before-and-after” ourselves, now. Pretty much, we only have Google’s word that Hummingbird is improving things. However, Google did offer some before-and-after examples of its own, that it says shows Hummingbird improvements.
A search for “acid reflux prescription” used to list a lot of drugs (such as this, Google said), which might not necessarily be the best way to treat the disease. Now, Google says results have information about treatment in general, including whether you even need drugs, such as this as one of the listings.
A search for “pay your bills through citizens bank and trust bank” used to bring up the home page for Citizens Bank but now should return the specific page about paying bills.
A search for “pizza hut calories per slice” used to list an answer like this, Google said, but not one from Pizza Hut. Now, it lists this answer directly from Pizza Hut itself, Google says.
Could it be making Google worse?
Almost certainly not. While we can’t say that Google’s gotten better, we do know that Hummingbird — if it has indeed been used for the past month — hasn’t sparked any wave of consumers complaining that Google’s results suddenly got bad. People complain when things get worse; they generally don’t notice when things improve.
Does this mean SEO is dead?
No, SEO is not yet again dead. In fact, Google’s saying there’s nothing new or different SEOs or publishers need to worry about. Guidance remains the same, it says: have original, high-quality content. Signals that have been important in the past remain important; Hummingbird just allows Google to process them in new and hopefully better ways.
Does this mean I’m going to lose traffic from Google?
If you haven’t in the past month, well, you came through Hummingbird unscathed. After all, it went live about a month ago. If you were going to have problems with it, you would have known by now.
By and large, there’s been no major outcry among publishers that they’ve lost rankings. This seems to support Google saying this is very much a query-by-query effect, one that may improve particular searches — particularly complex ones — rather than something that hits “head” terms that can, in turn, cause major traffic shifts.
But I did lose traffic!
Perhaps it was due to Hummingbird, but Google stressed that it could also be due to some of the other parts of its algorithm, which are always being changed, tweaked or improved. There’s no way to know.
How do you know all this stuff?
Google shared some of it at its press event today, and then I talked with two of Google’s top search execs, Amit Singhal and Ben Gomes, after the event for more details. I also hope to do a more formal look at the changes from those conversations in the near future. But for now, hopefully you’ve found this quick FAQ based on those conversations to be helpful.
By the way, another term for the “meaning” connections that Hummingbird does is “entity search,” and we have an entire panel on that at our SMX East search marketing show in New York City, next week. The Coming “Entity Search” Revolution session is part of an entire “Semantic Search” track that also gets into ways search engines are discovering meanings behind words. Learn more about the track and the entire show on the agenda page.
==================================================================================================

How To Redirect Mobile Users On Your Website

When someone from a mobile device visits a website, we want to redirect that person to a mobile website. Although our mobile platform does this natively, we thought it would be useful to share the redirection code.
It is best if this code is placed between <head> and </head> so it is recognized first and can redirect any mobile users quickly:
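The snippet itself did not survive in this copy of the article, so here is a reconstruction based on the description that follows it. The 800-pixel threshold and the http://yourdomain.com/mobile/ destination come from that description; the helper name mobileUrl is my own, added so the logic is easy to test:

```javascript
// Decide whether a visitor should be redirected to the mobile site.
// Anything narrower than 800px is treated as a mobile device.
function mobileUrl(screenWidth) {
  return screenWidth < 800 ? "http://yourdomain.com/mobile/" : null;
}

// In the page itself, inside <head>, you would run:
//   var target = mobileUrl(screen.width);
//   if (target) window.location = target;
```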
This code says: "If the screen size of the device viewing this website is less than 800 pixels wide, then redirect them to http://yourdomain.com/mobile/" (replace that URL with anything).
This can also be used to redirect mobile visitors to a YouTube video, a specific page on your website, or something else. A specific page on your website may be best (so they can still click around and access other parts of your website).
Update:
Since the sizes of mobile screens vary so much, it is best to detect by user agent. We suggest following these steps to set up more accurate mobile redirection:
1. Download the Javascript version from http://detectmobilebrowsers.com and save it to the desired folder (e.g. /javascript). Let's name it redirect.js.
2. Edit redirect.js, find http://detectmobilebrowser.com/mobile and change it to your mobile URL.
3. In your website header, between <head> and </head>, add a script tag referencing the file, for example <script type="text/javascript" src="/javascript/redirect.js"></script>.
Source: http://notixtech.com/blog/how-redirect-mobile-users-your-website
====================================================================

Pagination and SEO

Sites paginate content in various ways. For example:
  • News and/or publishing sites often divide a long article into several shorter pages.
  • Retail sites may divide the list of items in a large product category into multiple pages.
  • Discussion forums often break threads into sequential URLs.
If you paginate content on your site, and you want that content to appear in search results, we recommend one of the following three options.
  • Do nothing. Paginated content is very common, and Google does a good job returning the most relevant results to users, regardless of whether content is divided into multiple pages.
  • Specify a View All page. Searchers commonly prefer to view a whole article or category on a single page. Therefore, if we think this is what the searcher is looking for, we try to show the View All page in search results. You can also add a rel="canonical" link to the component pages to tell Google that the View All version is the version you want to appear in search results.
  • Use rel="next" and rel="prev" links to indicate the relationship between component URLs. This markup provides a strong hint to Google that you would like us to treat these pages as a logical sequence, thus consolidating their linking properties and usually sending searchers to the first page.

Using rel="next" and rel="prev"

You can use the HTML attributes rel="next" and rel="prev" to indicate the relationship between individual URLs. Using these attributes is a strong hint to Google that you want us to treat these pages as a logical sequence.
Let's say you have content paginated into the following URLs:
http://www.example.com/article-part1.html
http://www.example.com/article-part2.html
http://www.example.com/article-part3.html
http://www.example.com/article-part4.html
  1. In the <head> section of the first page (http://www.example.com/article-part1.html), add a link tag pointing to the next page in the sequence, like this:
    <link rel="next" href="http://www.example.com/article-part2.html">
    Because this is the first URL in the sequence, there’s no need to add markup for rel="prev".
  2. On the second and third pages, add links pointing to the previous and next URLs in the sequence. For example, you could add the following to the second page of the sequence:
    <link rel="prev" href="http://www.example.com/article-part1.html">
    <link rel="next" href="http://www.example.com/article-part3.html">
    
  3. On the final page of the sequence (http://www.example.com/article-part4.html), add a link pointing to the previous URL, like this:
    <link rel="prev" href="http://www.example.com/article-part3.html">
    Because this is the final URL in the sequence, there’s no need to add a rel="next" link.
"Google treats rel="previous" as a syntactic variant of rel="prev". Values can be either relative or absolute URLs (as allowed by the <link> tag). And, if you include a <base> link in your document, relative paths will resolve according to the base URL.
Some things to note:
  • rel="prev" and rel="next" act as hints to Google, not absolute directives.

  • If a component page within a series includes parameters that don't change the page's content, such as session IDs, then the rel="prev" and rel="next" values should also contain the same parameters. This helps our linking process better match corresponding rel="prev" and rel="next" values. For example, the page http://www.example.com/article?story=abc&page=2&sessionid=123 should contain the following:
    <link rel="prev" href="http://www.example.com/article?story=abc&page=1&sessionid=123" />
    <link rel="next" href="http://www.example.com/article?story=abc&page=3&sessionid=123" />
    

  • rel="next" and rel="prev" are orthogonal concepts to rel="canonical". You can include both declarations. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
    <link rel="canonical" href="http://www.example.com/article?story=abc&page=2"/>
    <link rel="prev" href="http://www.example.com/article?story=abc&page=1&sessionid=123" />
    <link rel="next" href="http://www.example.com/article?story=abc&page=3&sessionid=123" />
    

  • If Google finds mistakes in your implementation (for example, if an expected rel="prev" or rel="next" designation is missing), we'll continue to index the page(s), and rely on our own heuristics to understand your content.

==========================

6 Tips To Optimize Images For SEO To Increase Traffic

We all know that we should optimize our blog posts to rank well in search engines and increase blog traffic, but today I will show you how to use images for better search engine visibility. Every blogger, whether on the Blogger platform or the WordPress platform, should optimize the images used in a blog post before publishing it. Image optimization really works to improve blog traffic. Many people find images through Google Image Search to use in their own posts, so well-optimized images give you two benefits: they bring traffic from image search, and they help web crawlers better understand your post topic, which means a higher SERP (Search Engine Result Position). Let's go to the tutorial to learn some helpful image-optimization tips for SEO.


Image Optimization Tips:
Below are some very important tips to optimize images before uploading them to blog posts. These tips will improve your blog's SEO.


Use Keywords in Image Name:
Have you downloaded an image from the internet, or taken one with your camera, to use in your blog posts? Wait: don't use it directly in your blog. First change its name to a keyword-rich image name. By default an image name looks like image1432.jpg or photo123.jpg. Always rename it with your post keyword. For example, if I used an image for this post, I would rename it image-optimization.jpg.


Hyphen in Image Name:
Remember, when naming your images with your keywords, use neither a blank space nor an underscore ( _ ) between them. Always use a hyphen to separate the keywords, as I did in the first example. When you use a blank space, it is automatically converted to '%20', which has no meaning, so web crawlers have difficulty understanding the image. A hyphen is always the better option for optimizing the image.
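The '%20' behavior is easy to demonstrate: a space in a file name gets percent-encoded when it becomes part of a URL, while a hyphen passes through unchanged:

```javascript
// A space in a file name becomes "%20" in the URL; a hyphen stays readable.
const spaced = encodeURI("image optimization.jpg");     // "image%20optimization.jpg"
const hyphenated = encodeURI("image-optimization.jpg"); // "image-optimization.jpg"
```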


Use Best Format:
There are lots of image formats, such as JPG, PNG and GIF, but always use the format that suits the web best. JPG is the format most preferred by webmasters because it uses less memory than the other formats. A smaller image size helps reduce blog load time, which means better search engine ranking and more traffic.


Compress Images:
Before uploading images to your blog post, make sure you have compressed them. Compressing images also helps your blog load faster and reduces its bounce rate. You can compress images in Photoshop: go to the File menu and click Save for Web. It reduces the image size while maintaining image quality.
You can also use online tools to compress your image files. One of the best is Yahoo's Smush.it. Go try it and compress your images for free.


How to Use Images in Blog Post for Better Search Engine Visibility?
Now you have optimized your images. It's time to use them in your blog posts. Below are some advanced tips that also come under image optimization in SEO. Read them carefully and follow the instructions.


Proper Dimension:
When you upload images to a blog post, always give them proper dimensions. Specifying the dimensions helps the page load faster, because the browser does not have to guess the image's width and height, and it makes it easier for web crawlers to process your content. Use the format below to give your image dimensions.

<img width="320" height="210" src="URL of image"/>

Change the width and height values above according to your needs.


Title Tag and Alt Tag:
The title tag and alt tag work as a description for your images. Web crawlers can't read images; they understand them only by their file name, title tag and alt tag. Always choose your best keyword for the title and alt tags of your image. To use these tags, see the example below.

<img src="Image URL" title="Keyword" alt="Keyword"/>

Note: Don't stuff too many keywords into these tags. Only put your best keyword there, the one that best explains your post topic.
I have written a detailed tutorial on adding the title and alt attributes to images for Blogger.com users; please check the source link below.

Source: http://www.bloggertipstricks.com/2013/02/optimize-images-seo-increase-blog-traffic.html
================================================================

How To Reduce Blog Bounce Rate?

If your blog posts don't hold your first-time readers, the bounce rate of your blog will surely increase. Bounce rate is your blog's biggest enemy and can ruin its rankings from top to bottom. All major search engines, especially Google, dislike blogs with a very high bounce rate; it shows the weakness of a blog. If you want your blogging business at its peak, start working today to reduce your bounce rate. In this article I will tell you the best ways to overcome this issue. With a little effort you can beat this enemy and become a successful blogger who achieves his goals. Let's go to the tutorial.

What is Bounce Rate?
Bounce rate is simply the percentage of visitors who enter your blog from some source, such as a search engine or a referral site, and go back without clicking a single link. In other words, if you open a blog to check its latest posts, find there is no update, and leave, that visit counts toward the blog's bounce rate.
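As a quick illustration, bounce rate is just single-page visits divided by total visits, expressed as a percentage (the numbers below are made up):

```python
def bounce_rate(single_page_visits, total_visits):
    """Percentage of visits that ended after viewing a single page."""
    return 100.0 * single_page_visits / total_visits

# Hypothetical month: 1,000 visits, 350 of which left immediately.
print(bounce_rate(350, 1000))  # 35.0
```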


How to Reduce Blog Bounce Rate?
There are some common mistakes that almost every newbie blogger makes that increase a blog's bounce rate. Let's study them, and I will tell you how you can overcome these mistakes.


Quality Content:
There is a well-known phrase in blogging: content is king. Yes, content is always king. If you are not providing well-written, unique content, why would readers visit your blog? That means a higher bounce rate. So try to produce the best quality content for your readers; it will help you grow your blog's readership.


Interlink Your Older Posts:
Whenever you write a new post, always try to interlink your old posts using proper hypertext. It gives your readers more options to read your older posts. This not only helps decrease bounce rate, it also improves your blog's ranking in search engines. It is a well-known SEO (Search Engine Optimization) technique, and Wikipedia is the best example of it. Let me explain with an example.


Interlinking is a best SEO practice.
In the above example, the word "SEO" is linked to our SEO page, e.g. <a href="URL of your SEO page">SEO</a>. This is called interlinking or internal linking.


Provide Better Navigation:
Always try to use a good navigation system on your blog. Your readers should be able to understand your site structure. Give them every way to read your blog content easily: add categories to the blog sidebar, make a sitemap page, show your blog archive, etc.


Open External Links into New Tab:
Sometimes you need to link to external sites to give your readers more resources. For example, if you run a Blogger-template blog, you surely need to add demo and download links to your posts. Always open that type of link in a new tab. You can do this by adding the target="_blank" attribute to your hyperlinks, as in the example below.

<a href="Page URL" target="_blank">Anchor Text</a>


Show Post Summary
Use a read-more link on your blog's Homepage and label pages to show only a post summary; don't show the full-length post. If the post interests the reader, he or she will click the link to read the full article. That means you have beaten your enemy, the bounce rate. Cheers!

Source: http://www.bloggertipstricks.com/2013/02/reduce-blog-bounce-rate.html
================================================================

9 Tips To Reduce Blogger Blog Load Time

The loading time of a blog is a really important factor from an SEO perspective. Every webmaster should care about his or her blog's loading time, because nowadays the major search engines, especially Google, treat load time as a ranking factor. Besides this, visitors don't like sites that take too long to load; they will press the back button and go elsewhere for the information they are looking for, which also increases the bounce rate that affects your blog's ranking. So today I decided to share my own tips on how to reduce load time on a Blogger blog. Below are some very important tips you can use to optimize your blog's load speed.

Tips To Reduce Blog Load Time

1) Avoid JavaScript:
JavaScript slows down page loading. Try to remove unnecessary JavaScript code from your blog, and don't load scripts from external sites when you can store them inside your Blogger template. You can save all your JavaScript code in your template above </head> by using the code below.

<script type='text/javascript'>
//<![CDATA[
Paste Your JavaScript Code Here
//]]>
</script>

Just replace the placeholder line "Paste Your JavaScript Code Here" with the JavaScript code you want to store in your template.


2) Widgets and Social Media Buttons:
Widgets and social media buttons are really important, but too many social media buttons and unnecessary widgets can slow your blog down. Try to remove widgets that are not really necessary for your blog and use social media buttons wisely.


3) Give Proper Dimensions To Images:
Always give proper height and width to the images you use in your posts, because it helps the browser render the page quickly. To add the height and width attributes to your images, use the code below.

<img width=”” height=”” src=”URL Of Image” /> 

4) Don’t Use An Image As A Background:
If your blog template has an image as its background, remove it. A background image is responsible for slow loading speed. Always use a background color, not an image. To remove the background image, search your template for code like the one below.

body { background: #B3B3B3 url(http://abc.com/background-image.jpg); }

The code will surely be different in your template. To remove the background image, find code of this type and simply delete the url(...) part, leaving only the background color.


5) Reduce Advertisement On Blog:
Advertisement banners are coded with JavaScript, and as I already said in tip no. 1, too much JavaScript makes a blog load slower. So please use a limited number of advertisements on your blog.


6) Limited Number Of Posts At Home Page:
Try to show only 4-5 posts on your Homepage; if you show too many, it will take more time to load. A fast-loading Homepage keeps your visitors happy, and they will stay on your blog longer. You can set the number of Homepage posts from your Blogger dashboard settings.


7) Always Use Quality Blogger Templates:
Most newbie bloggers make this mistake: they upload any template that attracts them. There are lots of templates available on the internet, but many of them are poorly scripted, and using them makes it tough for search engine spiders to crawl your blog. You can use these Blogger Templates for your blog instead, as they are all developed by a professional.


8) Use Read More Link To Summarize Blog Posts At Homepage:
Always summarize your blog posts on the Homepage by using a read-more link. It has two benefits:
Improved blog load time.
Increased page views and reduced bounce rate.


9) Eliminate External Links:
Eliminate all unnecessary external links from your blog. A common example is stat counters; you can use Google Analytics instead of stat counter widgets to check your website's traffic.


How To Check Blog Load Speed?
There are lots of tools available on the internet to check your blog's load time. Here I am listing two famous ones.

Google PageSpeed Insight
Pingdom Tools
Just open the link and type your blog address like this.

http://www.your-blog-address.blogspot.com/
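If you are curious what these tools measure at their core, here is a rough Python sketch of timing how long a page takes to download. It spins up a tiny local test server instead of hitting a real blog, so the number it prints is only illustrative:

```python
import http.server
import threading
import time
import urllib.request

def measure_load_time(url):
    """Time a single download of the page at url, in seconds."""
    start = time.time()
    urllib.request.urlopen(url).read()
    return time.time() - start

# Tiny local server standing in for a blog.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

elapsed = measure_load_time("http://127.0.0.1:%d/" % server.server_port)
print("loaded in %.3f seconds" % elapsed)
server.shutdown()
```

Real services like PageSpeed Insights measure much more (render time, resource sizes, script execution), but total download time is the basic idea.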

Feedback!

These are 9 Tips that you can apply to improve your blog loading speed. If you are aware with more tips that can help to reduce load time in blogger blogs, then please share them with me through comments. Your help will be appreciated.

================================================================

Google’s Matt Cutts On Why Links Still Rule & How SEOs Go Wrong In Getting Them

Eric Enge has published an interview with Matt Cutts, Google's head of search spam. It follows the same format as the interview he published with Cutts in 2010, but this time the discussion revolves mostly around link building and what is wrong with how SEOs do it today.
In short, Matt Cutts would love a world where link builders thought first about the content or website and why it is worthy of a link, rather than being concerned first with getting links. In the interview, the two discuss whether link building is legal or illegal, whether press releases should be used for link building, the problem with content syndication and guest blogging, plus much, much more.
Here are some key takeaways from the interview, but make sure to read the full interview over here.
=> Link Building Is Not Bad: Just don’t try to get the link first, have compelling content people want to link to instead.
=> Press Release Links: They still probably don't count, but your goal should not be the link itself; it should be the exposure the press release gives you to editors who may read it and cover your story.
=> Content Syndication: If your content is being syndicated on other sites, give Google signals to know you are the original source. Make sure you publish well before others, possibly use rel=canonical, link to main source of content, and maybe use authorship.
=> Problem With Guest Posts: A large number of people are doing it the wrong way; guest posts have become more like article directories or article banks these days.
=> Links: Links are still "the best way" to rank content.
This interview was conducted in person between Eric Enge and Matt Cutts while at SMX Advanced 2013 a few weeks ago.
Source: http://searchengineland.com/eric-enge-interviews-googles-matt-cutts-on-the-problem-with-link-building-today-166418?utm_campaign=wall&utm_source=socialflow&utm_medium=facebook
