links for 2011-06-03
- pushState + ajax = pjax
- CIV in HTML5

Optimization is usually an afterthought or an after-the-fact process in development; it typically begins only once a product is built, and only then if the site or application shows visible signs of needing optimization. I am vehemently against optimizing at the last minute; it should be part of your workflow, integrated into your development at varying levels. One area of optimization that is critical for performance and SEO is compacting and minimizing file sizes. Image file sizes in particular, when left unchecked, can seriously impede a site’s download time, which affects that site’s rankings in SERPs, among other things. There are a slew of methods to achieve this today (I use pngcrush), but as far as I know they are all focused on images once they are production ready. While this standard practice is not going anywhere, check out these methods to be proactive with your image optimization before images reach production. These are only for Photoshop, but it is my image editor of choice when I’m not developing in-browser.
The gamma value of a computer monitor affects how light or dark an image appears in a browser. Windows uses a gamma of 2.2 and Mac OS uses 1.8, so images typically look darker on Windows. In Photoshop, we can preview how images will look on operating systems with different gamma values and make adjustments to compensate. Gamma values can be altered to increase cross-platform compatibility, to aid the UX, and also for simple design aesthetics.
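As a rough illustration, assuming a simple power-law display response (displayed brightness ≈ signal^gamma): a 50% gray value renders at about 0.5^2.2 ≈ 0.22 of full brightness on a 2.2-gamma monitor, versus 0.5^1.8 ≈ 0.29 at gamma 1.8, which is why the same midtones read darker on Windows.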
After selecting Save for Web & Devices, choose one of the options from the dialog box’s Preview pop-up menu to simulate how the image will look at that gamma value.
The Photoshop Image Processor saves copies of a folder of images in JPEG format. It can also be used to resize images and to convert their color profiles to the Web-standard sRGB.
The Save for Web & Devices option lives under the File Menu, the same place where all of the Save options are located in Photoshop. Selecting it opens further options for optimizing images and previewing the results. Across the top of the dialog box are four tabs: Original; Optimized, which displays the image with the current optimization settings applied; 2-Up, which displays two versions of the image; and 4-Up, which displays four versions of the image.
Select a preset optimization setting from the Preset Menu, or set individual options; the available options change depending on the selected file format. In 4-Up mode, choosing Repopulate Views from the Optimize Menu automatically generates lower-quality versions of your image after you change the optimization settings.




You can change the optimization settings to your desired balance of image quality and file size; if you’re using multiple slices, make sure you optimize all of them too. To ensure that an optimized image’s colors are cross-browser compatible, convert any image with an embedded color profile other than sRGB before you save for web. The Convert to sRGB option is on by default.
You can choose which XMP metadata saves with your optimized file from the Metadata Menu. Metadata is only partially supported by the GIF and PNG formats and fully supported by JPEG.

Save your optimization settings as a named set and apply them to other images; they’ll appear in the Preset Menu.
Note: If you have an image with any embedded color profile other than sRGB, convert the image’s colors to sRGB before saving for the web.
You can preview optimized images in any browser you have installed; simply click the browser icon at the bottom of the Save For Web & Devices dialog box. I had no browsers in the pop-up menu, so I added some for clarity’s sake. I selected Firefox and, sure enough, an instance of Firefox fired up and displayed the image preview, along with a caption listing the image’s file type, dimensions (px), size, compression specifications and any other settings in use.


When saving for Web & Devices, select the tab with the desired display option at the top of the dialog box. Then select Optimize To File Size from the Optimize Menu (to the right of the Preset Menu), enter the desired file size and select a Start With option. The Auto Select GIF/JPEG Start With option automatically selects the optimal format based on the image’s content.
Photoshop also lets us save files for email: click the Optimized tab in the Save For Web & Devices dialog box and select JPEG Low from the Preset Menu. Select the chain-link icon to the right of the dimension boxes to retain the image’s proportions, then enter 400 pixels for the width. Adobe recommends this size, but I don’t, and I don’t recommend optimizing for email at all, at least for my purposes. I’m sure there are good reasons for it; I’m just not aware of any.
The only reason I’ve ever been against an AJAX-driven site is that the content is not accessible to the bots, which seriously limits your SEO and findability. Google has come up with a solution to make AJAX-driven sites crawlable, and by following their easy guide, my reasoning against AJAX sites no longer holds. Below is my interpretation of the guide, but you can check it out in its entirety here.
You need to let the bots know that you support the AJAX crawling scheme, and you do this simply by adding a ! right after the # in your hash fragments. Adding the ! completes your site’s adoption of the scheme, and your site is now considered AJAX crawlable.
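Using the example URL from the next step, a plain hash-fragment URL and its hash-bang equivalent would look like this:
www.example.com/ajax.html#key=value (plain hash fragment, not covered by the scheme)
www.example.com/ajax.html#!key=value (hash bang, opts the page into the scheme)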
Add _escaped_fragment_ support to your server: you need to provide HTML snapshots of your URLs so the bots can see your content. Essentially we want the bots to see the URL www.example.com/ajax.html?_escaped_fragment_=key=value instead of what users see, www.example.com/ajax.html#!key=value. We accomplish this using one of three methods described in Google’s guide.
Regardless of your method, it needs to be tested using Fetch as Googlebot.
In order to make pages without hash fragments crawlable, simply add this meta element to the document’s head section. It tells the bots to crawl the ugly version of the URL.
<meta name="fragment" content="!" />
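To illustrate with a hypothetical page: when a crawler finds that meta tag on www.example.com/page, it requests the ugly version www.example.com/page?_escaped_fragment_= (with an empty value) and indexes the HTML snapshot served there, while users and search results keep the clean URL.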
Update your existing sitemap so that all of the crawlable URLs you want indexed are listed there. See, I told y’all this was easy.
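As a minimal sketch (standard sitemap protocol format, reusing the example URL from above), a sitemap entry for the pretty #! URL might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- list the pretty #! URLs as you want them to appear in search results -->
  <url>
    <loc>http://www.example.com/ajax.html#!key=value</loc>
  </url>
</urlset>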

Setting up OpenID to work with your own domain is super easy. Add the following markup to your document’s head.
<link rel="openid.server" href="http://www.myopenid.com/server" />
<link rel="openid.delegate" href="http://youraccount.myopenid.com/" />
<link rel="openid2.local_id" href="http://youraccount.myopenid.com" />
<link rel="openid2.provider" href="http://www.myopenid.com/server" />
<meta http-equiv="X-XRDS-Location" content="http://www.myopenid.com/xrds?username=youraccount.myopenid.com" />
That’s it… now you can log into OpenID-enabled sites with myOpenID as your OpenID server.

- The description, as entered by the person who uploaded it.
- 59446316@N08: the ID of the content owner; you can use this to link to their buddy icon.
- jalbertbowdenii: your Flickr screen name.