When you’ve decided to build yourself a new site, whether it’s because the old one needs an update or you’re simply after a new image, there’s a very important step to monitor. Before you get too far into the process, you need to make sure you’re not making a rookie mistake and allowing the search engines to index both versions of your website. Doing so can cause you grief, and could ultimately penalize both sites for duplicate content.
When you’ve begun working on the newest version of your site, you need to ensure it isn’t being indexed by the search engines, so you can work all you like without worry. The simplest way is to use your .htaccess file to block the bots; alternatively, if you have the means, you can work on a local server where the site isn’t technically on the internet. Duplicate content can leave Google or Bing unsure which page it should list in response to a search. The search engines suddenly have two versions of your website and content to consider, and need to determine which they feel is the most relevant of the two. Seeing as your old site originally had the content, you stand to injure your brand’s reputation and your new URL simply by working on a new site or look.
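As a rough sketch, an .htaccess file in the root of the in-progress copy can both send a noindex header and turn the major crawlers away outright. The domain is hypothetical, and this assumes Apache with mod_headers and mod_rewrite enabled:

```apache
# Placed in the web root of the development copy (e.g. a hypothetical dev.example.com)

# Ask crawlers not to index anything served from this directory
Header set X-Robots-Tag "noindex, nofollow"

# And turn the major bots away entirely, just to be safe
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (Googlebot|bingbot|Slurp) [NC]
RewriteRule .* - [F,L]
```

Once the new site goes live and replaces the old one, remember to remove these rules, or the search engines will happily ignore your finished work too.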
Duplicate content isn’t just a concern when you’re working on your own website; it’s something you should make a point of occasionally monitoring. A bothersome trait, and a difficult problem to tackle, is when your own original content ends up being scraped by a bot and winds up on an aggregator site. You can hunt for your own content by searching for key phrases and terms you’ve used in the content and/or title, and hopefully the only sites that come up are your own or those you’ve given permission to reproduce it. Typically scraper sites don’t rank that highly in search anymore, but there are still occasions where they show up higher in the results than the original creators. When that happens, you can become trapped in a terrible cycle of trying to have your own hard-earned content removed from the index, and trying to have credit given where credit is due.
There are a few basic rules and ideas you should always keep in mind when working on the web. Sometimes it doesn’t matter how often you’ve done the same steps before; you make a mistake. Depending on the severity, you can take down a website, mess up a web page, or make minor little code mistakes that break your page layout in the odd browser.
One of the most basic points to keep in mind while working on your website is to keep it simple. A less repeated, but just as important, lesson is to always back up your work. No matter how basic or simple your steps may be, you should always keep a backup before you push your changes live. Not keeping a backup of your original site or content before getting to work on it is a simple mistake, and one that can cost you extra work if you’re not careful. Even seasoned coders make mistakes, and when they happen, a blog for example *cough*, can be offline until a backup is restored.
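A pre-deploy backup can be as simple as tarring up the web root with a timestamp in the filename. The sketch below uses scratch directories so it runs anywhere; in practice you’d point `SITE_DIR` at your real site folder:

```shell
# Minimal backup sketch; the scratch directories stand in for a real web root.
SITE_DIR=$(mktemp -d)                 # pretend this is e.g. ~/public_html
echo "<h1>Hello</h1>" > "$SITE_DIR/index.html"

BACKUP_DIR=$(mktemp -d)
STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="$BACKUP_DIR/site-$STAMP.tar.gz"

# Archive the whole site before touching anything
tar -czf "$ARCHIVE" -C "$SITE_DIR" .
echo "Backup written to $ARCHIVE"
```

Restoring after a *cough* incident is then just a `tar -xzf` into a fresh directory.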
But enough about completely crashing a website or losing content and materials; there are smaller errors you can make that can also hamper your site, and they aren’t as immediately obvious. If you’ve been rewriting your basic tags, say your title, description and keywords (yes, I know, the internet says they don’t really matter anymore), and you happen to mix them up with the wrong content, you could see a negative impact on your rankings. Even the loss of a single position in the search results can equate to lost conversions. Another common error, one which doesn’t directly impact your rankings and site performance and is a tad more difficult to detect, is mis-tagging elements on your pages. It may seem a small, innocuous step to miss on a website or page, but every little thing adds up. And when it comes to optimization and your online competition, every little bit helps.
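For illustration, a page’s head might look like the snippet below (the page and shop are hypothetical); the mix-up described above is as simple as pasting the title or description from one page into another and not noticing:

```html
<!-- Hypothetical product page: each page should carry tags that
     describe that page, not ones copied over from a sibling page. -->
<head>
  <title>Blue Widgets | Acme Widget Shop</title>
  <meta name="description" content="Hand-made blue widgets, shipped worldwide.">
  <meta name="keywords" content="widgets, blue widgets, acme">
</head>
```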
It should be no surprise to anyone out there that Google has their share of privacy concerns. People worry about their search history and their emails being read; for some who use the browser, the worry extends to their entire online activity profile. Everybody has always assumed that Google knew what you were doing and kept track of everything, and Google never really helped their case by confirming or denying it. But now you can get an insight into just how much Google does know about you.
Earlier today, Google announced a new service they call Account Activity, which does exactly as its name suggests. For users who opt in to the service, once a month Google will send you a report about the information it has collected on you for that month while signed into your Google account. Being ever curious, I opted in, and a few hours later I received all of the data that had been gathered on my activity. Bearing in mind that I also use an Android device, the amount of data that could be collected about my usage is quite large. Yet when I went through the report, I found the information was vague at best, at least in terms of what they keep. It tracked the top three people I email, how many emails I had coming and going (note: not the content of them), and the devices and platforms I’ve used while signed in. Way down at the bottom of the report is Web History, and since I’d opted out of allowing them to collect any data, it was completely blank.
Since Google unified their privacy policies across their products, there seems to have been a sudden surge of concern about what data Google collects about its users. Personally, it was never a concern, because while true privacy online doesn’t exist, as a user you still have an incredible amount of control over what information you share with the world, and with the services out there. The disconnect between the reality and the paranoia occurs where people stop reading about their services and just run amok with what’s trending, whether on Twitter, Facebook, or any other social media network. Every service on the internet, not just Google, is only viable because of the users who share information with it. Even if it’s something as simple as a username, without even that fragment of information they couldn’t exist. The next time you read about some internet company stealing your information or selling it to third parties, instead of jumping on the bandwagon, have a look at your settings if you’re a part of the network. It’s the user who has the control at the end of the day; if you don’t want to be a part of a service, leave it.
In all the ruckus made about the privacy issues people keep bringing up, it always comes back to the same question: if you’re so unhappy, why don’t you just stop using it? The real issue with privacy and being online, one the vast majority don’t, or won’t, realize, is that it doesn’t truly exist. If you want your information to be private, never sign anything. Never use the internet, don’t get an email address, and move to a mountainside. And even then, even if you lived all alone in a shack on the side of a mountain, if someone sees you and writes a blog about you, sorry, no more privacy. All you can do to maintain control online is be aware of the sites you use, what their policies are, and what they change to if they change. Google didn’t change anything about how they do their work; they simply streamlined it to make things easier for the user, and for themselves. Facebook, Apple, Microsoft, Yahoo: all massive companies, all of which became that way because you’ve used their products and given them your information. Companies don’t grow like trees; they grow with your personal, private information.
There’s a new type of search engine making its debut on the web, dubbed Trapit. It’s unique in its own right simply because of the premise it’s built on: by learning what it is you search for, it delivers similar results for you to look through.
It’s not an unheard-of idea, or even really a unique one at that; Trapit, however, takes it a step further and tries to make educated guesses about your preferences. It’s the same kind of algorithm Apple’s new Siri technology uses to deliver answers as you ask for them. While Trapit specifically typecasts itself as a discovery engine, not a search engine, that doesn’t take what they have deemed an upcoming competition with Google out of the picture. Trapit co-founder Gary Griffiths called Google an online yellow pages, saying that it works well for direct queries but not for getting to new content.
It’s an interesting idea and a different perspective on delivering search results, to be sure. But it’s rather curious that general users are, so far, okay with the way Trapit works. The puzzlement comes from remembering that the public enjoys having their privacy protected, as they should, and that more than one concern or complaint has been registered in Google’s realm about privacy and about how your search terms are saved and/or indexed as part of your search history. My question to the early adopters and testers of Trapit would then be: how do you expect Trapit learns what you may enjoy? It saves your searches, either in a cookie on your computer or within their member database, and extrapolates from there via its algorithm.
But then again, it seems that it’s alright for a little player out there to have access to your searches and (potentially) information, but not the big guys who are frequently held accountable. Perhaps it’s just another case of wanting to eat your cake and have it too.