Professional writing

>> Saturday, September 26, 2009

Professional Writers

A professional writer is someone who has been paid for work that they have written.

Professional writing is/as rhetorical

Professional writing is closely connected to the concept of rhetoric. Rhetoric focuses on informing or persuading an audience, and a successful professional writer is able to create interest in that audience. Professional writing also takes place in a professional context, be it a workplace or freelance work: it is produced by someone with knowledge and skill at writing who understands the wide range of requirements the piece being composed must meet.
One of the main principles of rhetoric, when applied to the work of professional writers, is the art of effective communication and creating authoritative arguments.

Professional writing in other fields

Even if a student does not plan on a writing career, they must still prepare for the writing their career will inevitably require. Any professional field requires some form of writing, so a background in professional writing is never wasted. Examples of professional writing in other career fields include the following:

Law
- Case studies
- Briefs
- Client correspondence

Science and Engineering
- Lab reports
- Journal articles
- Technical reports
- Experimental procedures and logs
- Grant proposals

Business
- Advertisements
- Marketing analyses
- Inventory reports
- Damage reports

Entertainment
- Recording contracts
- Project proposals
- Reviews
- Website authoring
Nearly all professions require written documents; in other words, all staff members produce written documentation of their work. Whether it is a fast food chain looking for additional web development, a law firm editing legal documents, or a music venue that needs flyers and posters created on a regular basis, a professional writer can fit into almost any organization. Professional writers are especially valuable in the workplace for their many talents, including communication skills, creativity, technological proficiency, and other social skills.

Professional writing as compared to other majors

Professional writing, particularly as an undergraduate major, is most often confused with English and/or Journalism due to their similar skill groupings and classes.
English courses often include classes in professional writing and professional composition, emphasizing a clear and technical approach to writing. However, the majors begin to differ in that English has a larger focus on the reading and analysis of literature. Traditionally, writing within an English major also revolves around the creation of essays and critiques, alongside creative writing such as poetry and fiction.
Journalism, while retaining the conciseness characteristic of most professional writing documents, tends to produce short, fact-based articles rather than the more in-depth reports found in professional writing.
Professional writers tend to address more specific and varied audiences, with a focus that goes beyond facts alone. Professional writing involves advanced writing skills, with an emphasis on writing in digital environments (e.g., web authoring, multimedia writing), on evaluating rhetorical techniques to tailor writing to specific audiences, and on proficiency in writing in a professional setting, such as the workplace of a company or a professional organization.


Content management

Content management, or CM, is a set of processes and technologies that support the evolutionary life cycle of digital information. This digital information is often referred to as content or, to be precise, digital content. Digital content may take the form of text (such as documents), multimedia files (such as audio or video), or any other file type that follows a content life cycle requiring management.
As of May 2009, the world's digital content is estimated at 487 billion gigabytes, the equivalent of a stack of books stretching from Earth to Pluto ten times.

The process of content management

Content management practices and goals vary with mission. News organizations, e-commerce websites, and educational institutions all use content management, but in different ways. This leads to differences in terminology and in the names and number of steps in the process. Typically, though, the digital content life cycle consists of 6 primary phases:
creation, editing, approval, publication, archiving, and retirement or removal.
For example, an instance of digital content is created by one or more authors. Over time that content may be edited. One or more individuals may provide some editorial oversight thereby approving the content for publication. Publishing may take many forms. Publishing may be the act of pushing content out to others, or simply granting digital access rights to certain content to a particular person or group of persons. Later that content may be superseded by another form of content and thus retired or removed from use.
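The life cycle just described can be sketched as a simple state machine. The phase names and the strictly linear ordering below are illustrative assumptions; real workflows allow loops such as repeated edit/approve cycles.

```python
# A minimal sketch of the digital content life cycle as a state machine.
# The phase names and the strictly linear ordering are illustrative
# assumptions; real workflows allow loops (e.g., repeated edit/approve cycles).

PHASES = ["created", "edited", "approved", "published", "archived", "retired"]

class ContentItem:
    def __init__(self, title):
        self.title = title
        self.phase = PHASES[0]

    def advance(self):
        """Move to the next phase; stay put once the item is retired."""
        i = PHASES.index(self.phase)
        if i + 1 < len(PHASES):
            self.phase = PHASES[i + 1]
        return self.phase

doc = ContentItem("Quarterly report")
doc.advance()            # created -> edited
doc.advance()            # edited -> approved
print(doc.phase)         # approved
```

Each call to `advance` models one hand-off in the life cycle, for example from an author's draft to an approved, publishable item.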
Content management is an inherently collaborative process. It often consists of the following basic roles and responsibilities:
Creator - responsible for creating and editing content.
Editor - responsible for tuning the content message and the style of delivery, including translation and localization.
Publisher - responsible for releasing the content for use.
Administrator - responsible for managing access permissions to folders and files, usually accomplished by assigning access rights to user groups or roles. Admins may also assist and support users in various ways.
Consumer, viewer or guest - the person who reads or otherwise takes in content after it is published or shared.
A critical aspect of content management is the ability to manage versions of content as it evolves (see also version control). Authors and editors often need to restore older versions of edited products due to a process failure or an undesirable series of edits.
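That versioning idea can be sketched as a toy append-only history; real version control adds diffs, authorship, and timestamps.

```python
# A toy sketch of content versioning: every save appends a snapshot, and
# any earlier snapshot can be restored after an undesirable series of edits.
# The append-only model is an illustrative simplification.

class VersionedContent:
    def __init__(self, text):
        self.versions = [text]          # version 0 is the original

    @property
    def current(self):
        return self.versions[-1]

    def save(self, text):
        self.versions.append(text)

    def restore(self, version_number):
        """Make an older snapshot current again by re-appending it."""
        self.versions.append(self.versions[version_number])

page = VersionedContent("First draft")
page.save("Draft with a bad series of edits")
page.restore(0)                         # roll back to the first draft
print(page.current)                     # First draft
```

Restoring by re-appending (rather than deleting newer versions) preserves the full edit history, which is how most version-control tools behave.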
Another equally important aspect of content management involves the creation, maintenance, and application of review standards. Each member of the content creation and review process has a unique role and set of responsibilities in the development and/or publication of the content. Each review team member requires clear and concise review standards which must be maintained on an ongoing basis to ensure the long-term consistency and health of the knowledge base.
A content management system is a set of automated processes that may support the following features:
Import and creation of documents and multimedia material
Identification of all key users and their roles
The ability to assign roles and responsibilities to different instances of content categories or types.
Definition of workflow tasks often coupled with messaging so that content managers are alerted to changes in content.
The ability to track and manage multiple versions of a single instance of content.
The ability to publish the content to a repository to support access to the content. Increasingly, the repository is an inherent part of the system, and incorporates enterprise search and retrieval.
Content management systems take the following forms:
a web content management system is software for web site management - which is often what is implicitly meant by this term
the work of a newspaper editorial staff organization
a workflow for article publication
a document management system
a single source content management system - where content is stored in chunks within a relational database


A content management implementation must also be able to manage content distribution and digital rights across the content life cycle. Content management systems are often coupled with digital rights management (DRM) systems in order to control user access and digital rights. Here the read-only structures of DRM systems impose limitations on content management implementations, since they do not allow protected content to be changed during its life cycle. Creating new content from managed (protected) content is another issue, as it takes the protected content outside the controlling management system. Few content management implementations cover all of these issues.


Content development (web)

Web content development is the process of researching, writing, gathering, organizing, and editing information for publication on web sites. Web site content may consist of prose, graphics, pictures, recordings, movies or other media assets that could be distributed by a hypertext transfer protocol server, and viewed by a web browser.

Content developers and web developers

When the World Wide Web began, web developers either generated content themselves, or took existing documents and coded them into hypertext markup language (HTML). In time, the field of web site development came to encompass many technologies, so it became difficult for web site developers to maintain so many different skills. Content developers are specialized web site developers who have mastered content generation skills. They can integrate content into new or existing web sites, but they may not have skills such as script language programming, database programming, graphic design and copywriting.
Content developers may also be search engine optimization (SEO) specialists or Internet marketing professionals, because content is often described as "king": high-quality, unique content is what search engines look for, so content development specialists play a very important role in the search engine optimization process. One issue currently plaguing web content development is keyword-stuffed content, prepared solely to manipulate a search engine. It gives genuine web content writing professionals a bad name, because the result is content written to appeal to machines (algorithms) rather than to people or a community. Search engine optimization specialists commonly submit content to article directories to build their website's authority on a given topic. Most article directories allow visitors to republish submitted content on the condition that all links are maintained, and this has become a common search engine optimization method. If written according to SEO copywriting rules, the submitted content benefits both the publisher (free SEO-friendly content for a webpage) and the author (a hyperlink pointing to his or her website, placed on an SEO-friendly webpage).
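One rough way to illustrate the keyword-stuffing problem is keyword density: the share of a text's words taken up by a single keyword. The 5% threshold below is an arbitrary assumption for illustration, not a figure used by any search engine.

```python
# A rough sketch of how keyword-stuffed text might be flagged: compute the
# share of words taken up by one keyword. The 5% threshold is an arbitrary
# illustrative assumption, not a rule used by any real search engine.

def keyword_density(text, keyword):
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

def looks_stuffed(text, keyword, threshold=0.05):
    return keyword_density(text, keyword) > threshold

natural = ("Our bakery in Lyon makes fresh sourdough bread every morning "
           "using a slow overnight fermentation and locally milled organic "
           "flour from nearby farms")
stuffed = "bread bread cheap bread best bread buy bread online bread"

print(looks_stuffed(natural, "bread"))  # False
print(looks_stuffed(stuffed, "bread"))  # True
```

Real search engines use far more sophisticated signals, but the contrast between the two samples captures the "machines rather than people" failure mode described above.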


Content designer

A content designer is a designer who designs content for media or software. The term is mainly used in web development. Depending on the content format, the content designer usually holds a more specific title such as graphic designer for graphical content, writer for textual content, instructional designer for educational content, or a programmer for automated program/data-driven content.

A senior content designer is a designer who leads a "content design" group in designing new content for a product. Depending on the purpose of the content, the role of a senior content designer may be similar or identical to a communication design, game development or educational role with a different title more associated with those professions. For example: a senior content designer in a communication design profession is better known as a creative director.


Content adaptation

Content Adaptation is the action of transforming content to adapt to device capabilities. Content adaptation is usually related to mobile devices that require special handling because of their limited computational power, small screen size and constrained keyboard functionality.
Content adaptation can roughly be divided into two fields: media content adaptation, which adapts media files, and browsing content adaptation, which adapts Web sites for mobile devices.

Browsing Content Adaptation

Advances in the capabilities of small, mobile devices, such as mobile phones (cell phones) and personal digital assistants, have led to an explosion in the number of device types that can now access the Web. Some commentators refer to the Web as accessed from mobile devices as the Mobile Web.
The sheer number and variety of Web-enabled devices poses significant challenges for authors of Web sites who want to support access from mobile devices. The W3C Device Independence Working Group described many of the issues in its report Authoring Challenges for Device Independence.
One approach to solving the problem is based around the concept of Content Adaptation. Rather than requiring authors to create pages explicitly for each type of device that might request them, content adaptation transforms an author's materials automatically.
For example, content might be converted from a device-independent markup language, such as XDIME, an implementation of the W3C's DIAL specification, into a form suitable for the device, such as XHTML Basic, C-HTML or WML. Similarly a suitable device-specific CSS style sheet or a set of in-line styles might be generated from abstract style definitions. Likewise a device specific layout might be generated from abstract layout definitions.
Once created, the device-specific materials form the response returned to the device from which the request was made.
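The selection step can be sketched as a mapping from the requesting device's User-Agent header to a target markup language. The substrings and the mapping below are illustrative assumptions; real adapters consult a device description repository such as WURFL rather than matching strings by hand.

```python
# A toy sketch of the selection step in content adaptation: choose an output
# markup language based on the requesting device's User-Agent string.
# The substrings and target formats below are illustrative assumptions;
# real systems use a device description repository (e.g., WURFL).

def target_markup(user_agent):
    ua = user_agent.lower()
    if "wap" in ua:
        return "WML"                  # legacy WAP browsers
    if "mobile" in ua or "symbian" in ua:
        return "XHTML Basic"          # typical mobile browsers
    return "XHTML 1.0"                # default for desktop browsers

print(target_markup("Mozilla/5.0 (Windows NT 5.1)"))    # XHTML 1.0
print(target_markup("Nokia6230/2.0 Profile Mobile"))    # XHTML Basic
```

A full adapter would then apply the device-specific stylesheet and layout generation described above before returning the response.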
Content adaptation requires a processor that performs the selection, modification and generation of materials to form the device-specific result. IBM's Websphere Everyplace Mobile Portal (WEMP), BEA Systems' WebLogic Mobility Server, Morfeo's MyMobileWeb and Apache Cocoon are examples of such processors.
WURFL and WALL are popular open-source tools for content adaptation. WURFL is an XML-based Device Description Repository with APIs to access the data in Java and PHP (among other popular programming languages). WALL (Wireless Abstraction Library) lets a developer author mobile pages that look like plain HTML, then converts them to WML, C-HTML or XHTML Mobile Profile depending on the capabilities of the device from which the HTTP request originates.
Alembik (Media Transcoding Server) is a Java (J2EE) application providing transcoding services for a variety of clients and for different media types (image, audio, video, etc.). It is fully compliant with OMA's Standard Transcoder Interface specification and is distributed under the LGPL open source license.
Launched in 2007, Bytemobile’s Web Fidelity Service was the first carrier-grade, commercial infrastructure solution to provide wireless content adaptation to mobile subscribers on their existing mass-market handsets, with no client download required.


Cloud networking

Cloud networking is the interconnection of components to "meet the networking requirements inherent in cloud computing". Cloud networking allows users to "tap a vast network of computers that can be accessed from long distance by a cell phone, laptop or mobile device for information or data".

Legal issues

U.S. trademark application 77,596,599: Arastra, Inc. (aka Arista) applied to the USPTO on 20 October 2008 to trademark this descriptive and generic term on a Section 1(b) (intent to use) basis, covering "networking hardware and software to interconnect computers, servers and storage devices; computer software for use in controlling the operation and management of networks; computer software for use in connecting computer networks and systems, servers and storage devices; instructional manuals sold as a unit therewith", despite extensive prior use of the term by other companies such as Asankya ("the leader in Cloud networking services"), the existence of various solutions already in the space, and generic use by the press and bloggers.
The application was listed as abandoned on 3 February 2009, perhaps because, like Dell's earlier attempt to register "Cloud Computing", the term was deemed too generic.


Cloud Computing Manifesto

The Cloud Computing Manifesto is a manifesto containing a "public declaration of principles and intentions" for cloud computing providers and vendors, annotated as "a call to action for the worldwide cloud community" and "dedicated belief that the cloud should be open". It follows the earlier development of the Cloud Computing Bill of Rights which addresses similar issues from the users' point of view.
The document was developed "by way of an open community consensus process" in response to a request by Microsoft that "any 'manifesto' should be created, from its inception, through an open mechanism like a Wiki, for public debate and comment, all available through a Creative Commons license". Accordingly, it is hosted on a MediaWiki wiki and licensed under the CC-BY-SA 3.0 license.
The original, controversial version of the document, called the Open Cloud Manifesto, was sharply criticised by Microsoft, who "spoke out vehemently against it" for being developed in secret by a "shadowy group of IT industry companies", raising questions about conflicts of interest and resulting in extensive media coverage over the following days. A pre-announcement committed to official publication of the document on March 30, 2009 (in spite of calls to publish it earlier), at which time the identities of the signatories ("several of the largest technology companies and organizations", led by IBM along with the OMG, and believed also to include Cisco, HP, and Sun Microsystems) were to be revealed. Amazon, Google, and Microsoft are among those known to have rejected the document by declining to be signatories. The document was leaked by Geva Perry in a blog post on 27 March 2009 and confirmed to be authentic shortly afterwards.
The authors of both public and private documents have agreed to "work to bring together the best points of each effort".
The Open Cloud Manifesto version, developed in private by a secret consortium of companies, was prematurely revealed by Microsoft's Senior Director of Developer Platform Product Management, Steve Martin, on 26 March 2009. Microsoft claimed that it was "privately shown a copy of the document, warned that it was a secret, and told that it must be signed 'as is,' without modifications or additional input", a point disputed by Reuven Cohen (originally believed to be the document's author). Some commentators found it ironic that Microsoft should speak out in support of open standards, while others felt that its criticism was justified, comparing the episode to the "long, ugly war over WS-I". The call for open cloud standards was later echoed by Brandon Watson, Microsoft's Director of Cloud Services Ecosystem.
The following principles are defined by the document:
1. User centric systems enrich the lives of individuals, education, communication, collaboration, business, entertainment and society as a whole; the end user is the primary stakeholder in cloud computing.
2. Philanthropic initiatives can greatly increase the well-being of mankind; they should be enabled or enhanced by cloud computing where possible.
3. Openness of standards, systems and software empowers and protects users; existing standards should be adopted where possible for the benefit of all stakeholders.
4. Transparency fosters trust and accountability; decisions should be open to public collaboration and scrutiny and never be made "behind closed doors".
5. Interoperability ensures effectiveness of cloud computing as a public resource; systems must be interoperable over a minimal set of community defined standards and vendor lock-in must be avoided.
6. Representation of all stakeholders is essential; interoperability and standards efforts should not be dominated by vendor(s).
7. Discrimination against any party for any reason is unacceptable; barriers to entry must be minimised.
8. Evolution is an ongoing process in an immature market; standards may take some time to develop and coalesce but activities should be coordinated and collaborative.
9. Balance of commercial and consumer interests is paramount; if in doubt consumer interests prevail.
10. Security is fundamental, not optional.


Cloud computing

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.
The concept generally incorporates combinations of the following:
infrastructure as a service (IaaS)
platform as a service (PaaS)
software as a service (SaaS)
other recent (ca. 2007–09) technologies that rely on the Internet to satisfy the computing needs of users
Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers.
The term cloud is used as a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams and is an abstraction for the complex infrastructure it conceals.
The first academic use of this term appears to be by Prof. Ramnath K. Chellappa (currently at Goizueta Business School, Emory University) who originally defined it as a computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits.


Cloud computing customers do not generally own the physical infrastructure serving as host to the software platform in question. Instead, they avoid capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, while others bill on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development). A side effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak load limits. Additionally, "increased high-speed bandwidth" makes it possible to receive the same response times from centralized infrastructure at other sites.


Cloud computing users can avoid capital expenditure (CapEx) on hardware, software, and services when they pay a provider only for what they use. Consumption is usually billed on a utility (e.g. resources consumed, like electricity) or subscription (e.g. time based, like a newspaper) basis with little or no upfront cost. A few cloud providers are now beginning to offer the service for a flat monthly fee as opposed to utility billing. Other benefits of this time-sharing approach are low barriers to entry, shared infrastructure and costs, low management overhead, and immediate access to a broad range of applications. Users can generally terminate the contract at any time (thereby avoiding return on investment risk and uncertainty) and the services are often covered by service level agreements (SLAs) with financial penalties.
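The billing styles above can be compared with a short sketch; all prices are made-up illustrative numbers.

```python
# A sketch comparing the two billing styles described above: utility billing
# (pay per resource consumed) versus a flat subscription. The per-hour rate
# and monthly fee are made-up illustrative numbers.

def utility_bill(hours_used, rate_per_hour=0.10):
    return hours_used * rate_per_hour

def subscription_bill(flat_monthly_fee=50.0):
    return flat_monthly_fee

light_user = utility_bill(100)      # 100 compute-hours this month
heavy_user = utility_bill(2000)     # 2000 compute-hours this month

print(light_user)                   # 10.0  -> cheaper than the flat fee
print(heavy_user)                   # 200.0 -> the flat fee would be cheaper
```

Which model is cheaper depends entirely on usage volume, which is why providers can offer both side by side.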
According to Nicholas Carr, the strategic importance of information technology is diminishing as it becomes standardized and less expensive. He argues that the cloud computing paradigm shift is similar to the displacement of electricity generators by electricity grids early in the 20th century.
Although companies might be able to save on upfront capital expenditures, they might not save much and might actually pay more for operating expenses. In situations where the capital expense would be relatively small, or where the organization has more flexibility in their capital budget than their operating budget, the cloud model might not make great fiscal sense. Other factors impacting the scale of any potential cost savings include the efficiency of a company’s data center as compared to the cloud vendor’s, the company’s existing operating costs, the level of adoption of cloud computing, and the type of functionality being hosted in the cloud.


VMware, Sun Microsystems, Rackspace US, IBM, Amazon, Google, BMC, Microsoft, and Yahoo are some of the major cloud computing service providers. Cloud services are also being adopted by individual users and by large enterprises, including VMware, General Electric, and Procter & Gamble.
As of 2009, new players, such as Ubuntu Cloud Computing, are gaining attention in the industry.
The majority of cloud computing infrastructure, as of 2009, consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible anywhere that provides access to networking infrastructure. Clouds often appear as single points of access for all consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers and typically offer SLAs. Open standards are critical to the growth of cloud computing, and open source software has provided the foundation for many cloud computing implementations.

Criticism and Disadvantages of Cloud Computing

Because cloud computing does not generally let users physically possess the storage of their data (the exception being the possibility of backing data up to a user-owned device, such as a USB flash drive or hard disk), it leaves responsibility for data storage and control in the hands of the provider.
Cloud computing has been criticized for limiting users' freedom and making them dependent on the cloud computing provider, and some critics have alleged that it is only possible to use the applications or services that the provider is willing to offer. The London Times thus compares cloud computing to the centralized systems of the 1950s and 60s, in which users connected through "dumb" terminals to mainframe computers: users typically had no freedom to install new applications and needed approval from administrators to accomplish certain tasks, limiting both freedom and creativity. The Times argues that cloud computing is a regression to that era.
Similarly, Richard Stallman, founder of the Free Software Foundation, believes that cloud computing endangers liberties because users sacrifice their privacy and personal data to a third party. He stated that cloud computing is "simply a trap aimed at forcing more people to buy into locked, proprietary systems that would cost them more and more over time."
Following Stallman's observation, hosting and maintaining intranet and access-restricted sites (for government, defense, institutional and similar users) in the cloud would also be a challenge, and commercial sites using tools such as web analytics may not be able to capture the right data for their business planning.

Risk mitigation

Corporations or end-users who wish to avoid losing access to their data (or losing the data outright) are typically advised to research vendors' policies on data security before using their services. Gartner, a technology analyst and consulting firm, lists several security issues to discuss with cloud-computing vendors:
Privileged user access—Who has privileged access to data, and how are such administrators hired and managed?
Regulatory compliance—Is the vendor willing to undergo external audits and/or security certifications?
Data location—Does the provider allow for any control over the location of data?
Data segregation—Is encryption available at all stages, and were these encryption schemes designed and tested by experienced professionals?
Recovery—What happens to data in the case of a disaster, and does the vendor offer complete restoration, and, if so, how long does that process take?
Investigative support—Does the vendor have the ability to investigate any inappropriate or illegal activity?
Long-term viability—What happens to data if the company goes out of business, and is data returned and in what format?
Data availability—Can the vendor move your data onto a different environment should the existing environment become compromised or unavailable?
In practice, one can best determine data-recovery capabilities by experiment; for example, by asking to get back old data, seeing how long it takes, and verifying that the checksums match the original data. Determining data security can be more difficult, but one approach is to encrypt the data yourself. If you encrypt data using a trusted algorithm, then, regardless of the service provider's security and encryption policies, the data will only be accessible with the decryption keys. This leads, however, to the problem of managing private keys in a pay-on-demand computing infrastructure.
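The checksum step of that verification can be sketched with the standard library; the dict below is just a stand-in for a remote provider's store.

```python
# A sketch of the verification step suggested above: record a checksum before
# handing data to a provider, then confirm the checksum of what comes back.
# hashlib is part of the Python standard library; the dict is merely a
# stand-in for a remote cloud store.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"customer records, 2009 backup"
original_sum = checksum(original)

provider_store = {"backup.dat": original}      # stand-in for the cloud

retrieved = provider_store["backup.dat"]
assert checksum(retrieved) == original_sum, "data was corrupted or altered"
print("checksums match")
```

Encrypting before upload, as the text suggests, would follow the same pattern, with the checksum computed over the ciphertext the user controls.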

Key characteristics

Agility improves with users able to rapidly and inexpensively re-provision technological infrastructure resources. The cost of overall computing is unchanged, however, and the providers will merely absorb up-front costs and spread costs over a longer period.
Cost is claimed to be greatly reduced and capital expenditure is converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house). Some would argue that given the low cost of computing resources, that the IT burden merely shifts the cost from in-house to outsourced providers. Furthermore, any cost reduction benefit must be weighed against a corresponding loss of control, access and security risks.
Device and location independence enables users to access systems using a web browser regardless of their location or the device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
Peak-load capacity increases (users need not engineer for highest possible load-levels)
Utilization and efficiency improvements for systems that are often only 10–20% utilized.
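The utilization argument above can be sketched with a back-of-the-envelope calculation. The workload numbers, per-server capacity, and the assumption that the combined peak is only 60% of the sum of individual peaks are all illustrative.

```python
# A back-of-the-envelope sketch of the multi-tenancy utilization argument:
# servers dedicated to single tenants must each be sized for that tenant's
# peak, while a shared pool is sized for the (smaller) combined peak.
# All workload numbers and the 60% coincidence factor are illustrative.

def servers_needed(peak_load, capacity_per_server=100):
    # ceiling division: provision enough whole servers for the peak
    return -(-peak_load // capacity_per_server)

tenant_peaks = [120, 80, 60, 40]            # requests/sec per tenant

dedicated = sum(servers_needed(p) for p in tenant_peaks)

# Peaks rarely coincide; assume the combined peak is 60% of the sum.
combined_peak = int(sum(tenant_peaks) * 0.6)
shared = servers_needed(combined_peak)

print(dedicated)   # 5 servers across four dedicated deployments
print(shared)      # 2 servers in one shared pool
```

The gap between the two numbers is the efficiency improvement multi-tenancy promises for systems that otherwise sit mostly idle.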
Reliability improves through the use of multiple redundant sites, which makes cloud computing suitable for business continuity and disaster recovery. Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.
Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent and loosely-coupled architectures are constructed using web services as the system interface.
Security typically improves due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible. Ownership, control and access to data controlled by "cloud" providers may be made more difficult, just as it is sometimes difficult to gain access to "live" support with current utilities. Under the cloud paradigm, management of sensitive data is placed in the hands of cloud providers and third parties.
Sustainability comes about through improved resource utilization, more efficient systems, and carbon neutrality. Nonetheless, computers and associated infrastructure are major consumers of energy, and a given (server-based) computing task will use the same amount of energy whether it runs on-site or off.


A cloud application leverages the Cloud in software architecture, often eliminating the need to install and run the application on the customer's own computer, thus alleviating the burden of software maintenance, ongoing operation, and support. For example:
Peer-to-peer / volunteer computing (BitTorrent, BOINC projects, Skype)
Web application (Facebook)
Software as a service (Google Apps, SAP and Salesforce)
Software plus services (Microsoft Online Services)


Cloud infrastructure, such as Infrastructure as a service, is the delivery of computer infrastructure, typically a platform virtualization environment, as a service. For example:
Full virtualization (GoGrid, Skytap, iland)
Grid computing (Sun Cloud)
Management (RightScale)
Compute (Amazon Elastic Compute Cloud)
Platform (
Storage (Amazon S3, Nirvanix, Rackspace)


A cloud platform, such as Platform as a service, is the delivery of a computing platform and/or solution stack as a service. It facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers. For example:
Code Based Web Application Frameworks
Java Google Web Toolkit (Google App Engine)
Python Django (Google App Engine)
Ruby on Rails (Heroku)
.NET (Azure Services Platform)
Non-Code Based Web Application Framework
Cloud Computing Application & Web Hosting (Rackspace Cloud)
Proprietary (


A user is a consumer of cloud computing. The privacy of users in cloud computing has become of increasing concern. The rights of users are also an issue, which is being addressed via a community effort to create a bill of rights. The Franklin Street Statement was drafted with an eye towards protecting users' freedoms.



>> Monday, August 3, 2009

The Extensible Hypertext Markup Language, or XHTML, is a markup language that has the same depth of expression as HTML, but also conforms to XML syntax.
While HTML prior to HTML 5 was defined as an application of Standard Generalized Markup Language (SGML), a very flexible markup language, XHTML is an application of XML, a more restrictive subset of SGML. Because they need to be well-formed, true XHTML documents allow for automated processing to be performed using standard XML tools—unlike HTML, which requires a relatively complex, lenient, and generally custom parser. XHTML can be thought of as the intersection of HTML and XML in many respects, since it is a reformulation of HTML in XML. XHTML 1.0 became a World Wide Web Consortium (W3C) Recommendation on January 26, 2000. XHTML 1.1 became a W3C Recommendation on May 31, 2001.
XHTML is "a reformulation of the three HTML 4 document types as applications of XML 1.0". The W3C also continues to maintain the HTML 4.01 Recommendation, and the specifications for HTML 5 and XHTML 5 are being actively developed. In the current XHTML 1.0 Recommendation document, as published and revised to August 2002, the W3C comments that, "The XHTML family is the next step in the evolution of the Internet. By migrating to XHTML today, content developers can enter the XML world with all of its attendant benefits, while still remaining confident in their content's backward and future compatibility."
In the late 1990s, many considered that the future of HTML lay in the creation of a version adhering to the syntax rules of XML. The then current version of HTML, HTML 4, was ostensibly an application of Standard Generalized Markup Language (SGML); however the specification for SGML was complex, and neither web browsers nor the HTML 4 Recommendation were fully conformant with it. By shifting the underlying base from SGML to the simpler XML, HTML would become compatible with common XML tools; servers and proxies would be able to transform content, as necessary, for constrained devices such as mobile phones.
Another key advantage was extensibility. By utilising namespaces, XHTML documents could include fragments from other XML-based languages such as Scalable Vector Graphics and MathML. Finally, the renewed work would provide an opportunity to divide HTML into reusable components (XHTML Modularization) and clean up untidy parts of the language.
Relationship to HTML
HTML is the antecedent technology to XHTML. The changes from HTML to first-generation XHTML 1.0 are minor and are mainly to achieve conformance with XML. The most important change is the requirement that the document must be well-formed and that all elements must be explicitly closed as required in XML. In XML, all element and attribute names are case-sensitive, so the XHTML approach has been to define all tag names to be lowercase.
This contrasts with some earlier established traditions that began around the time of HTML 2.0, when many used uppercase tags. In XHTML, all attribute values must be enclosed by quotes; either single (') or double (") quotes may be used. In contrast, this was sometimes optional in SGML-based HTML, where attributes can omit quotes in certain cases. All elements must also be explicitly closed, including empty (aka singleton) elements such as img and br. This can be done by adding a closing slash to the start tag, e.g. <br /> and <hr />. Attribute minimization (e.g. writing a bare selected instead of selected="selected") is likewise prohibited in XHTML.


Template (programming)

Templates are a feature of the C++ programming language that allow functions and classes to operate with generic types. This allows a function or class to work on many different data types without being rewritten for each one.
Templates are of great utility to programmers in C++, especially when combined with multiple inheritance and operator overloading. The C++ Standard Library provides many useful functions within a framework of connected templates.
Technical overview
There are two kinds of templates: function templates and class templates.
Function templates
A function template behaves like a function that can accept arguments of many different types. In other words, a function template represents a family of functions.
Class templates
Simple class templates
A function template provides a specification for generating template functions, based on some parameters, which all share the same name and are treated as a unit (meaning that, for instance, the programmer just calls max with some arguments, and the appropriate instance of the template materializes).
Similarly, a class template provides a specification for generating classes based on parameters. The previous section shows an advanced use of class templates to perform compile-time computation on types. A more common use for class templates is the definition of polymorphic classes, such as containers.
For example, the C++ standard library has a list container called list, which is a template. The statement list<int> designates or instantiates a linked-list of type int. The statement list<string> designates or instantiates a linked-list of type string. The template has some additional parameters, which take default values if they are not specified. For example, the programmer can write a custom class that provides memory allocation services, and that class can be specified as an argument to the list template, to instantiate a list container that is tightly coupled to this custom allocator (at compile time).
A class template usually defines a set of generic functions that operate on the type specified for each instance of the class (i.e., the parameter between the angle brackets, as shown above). The compiler will generate the appropriate function code at compile-time for the parameter type that appears between the brackets.
Template specialization
The programmer may decide to implement a special version of a function (or class) for a certain type which is called template specialization. If a class template is specialized by a subset of its parameters it is called partial template specialization. If all of the parameters are specialized it is an explicit specialization or full specialization. Function templates cannot be partially specialized.
Specialization is used when the behavior of a function or class for particular choices of the template parameters must deviate from the generic behavior: that is, from the code generated by the main template, or templates.
For example, consider the max function again. Suppose that the programmer has a class for representing mathematical vectors called vec. This vector class has a member function called norm which returns the length of a vector. The programmer wants to be able to use the max template function over two vec objects, with the semantics that it returns the vector which has the greater norm of the two.
The regular max template does not necessarily work. It wants to compare objects using the greater-than operator:
template <typename L, typename R>
typename promote<L, R>::type max(const L &left, const R &right)
{
    // requires "::operator > (L, R)" or "L::operator > (R)"
    return left > right ? left : right;
}
One obvious way to solve the problem is to make the greater-than operator work for vec objects using their norm, so that this template is then applicable. However, here is how it can be solved with a template specialization for max:
template <>
typename promote<vec, vec>::type max(const vec &left, const vec &right)
{
    return left.norm() > right.norm() ? left : right;
}
The template specialization provides custom behavior for this combination of types; the norm member function is called on both vectors to retrieve their lengths, and it is their lengths (assumed to be some scalar numeric type) which are then compared with the greater-than operator.
Also, the previous section on class templates demonstrates a use of template specialization over class templates to write the base rules for type promotion for binary arithmetic operations.
Advantages and disadvantages
Some uses of templates, such as the maximum() function, were previously fulfilled by function-like preprocessor macros. For example, the following is a C++ maximum() macro:
#define maximum(a,b) ((a) < (b) ? (b) : (a))
Both macros and templates are expanded at compile-time. Macros are always expanded inline, whereas templates are only expanded inline when the compiler deems it appropriate. When expanded inline, macro functions and template functions have no extraneous run-time overhead. However, template functions will have run-time overhead when they are not expanded inline.
Templates are considered "type-safe", that is, they require type-checking at compile-time. Hence, the compiler can determine at compile-time whether or not the type associated with a template definition can perform all of the functions required by that template definition.
By design, templates can be utilized in very complex problem spaces, whereas macros are substantially more limited.
There are fundamental drawbacks to the use of templates:
Historically, some compilers exhibited poor support for templates. So, the use of templates could decrease code portability.
Many compilers produce unclear error messages when they detect a template definition error. This can increase the effort of developing templates, and prompted the inclusion of Concepts in the next C++ standard.
Since the compiler generates additional code for each template type, indiscriminate use of templates can lead to code bloat, resulting in larger executables.
Because a template by its nature exposes its implementation, injudicious use in large systems can lead to longer build times.
Additionally, the use of the "less-than" and "greater-than" signs as delimiters is problematic for tools (such as text editors) which analyse source code syntactically. It is difficult, or maybe impossible, for such tools to determine whether a use of these tokens is as comparison operators or template delimiters. For example, this line of code:
foo (a < b, c > d) ;
may be a function call with two integer parameters, each a comparison expression. Alternatively, it could be a declaration of a constructor for class foo taking one parameter, "d", whose type is the parametrised "a < b, c >".
Generic programming features in other languages
Initially, the concept of templates was not included in some languages, such as Java and C# 1.0. Java's adoption of generics mimics the behavior of templates, but is technically different. C# added generics (parameterized types) in .NET 2.0. The generics in Ada predate C++ templates.
Although C++ templates, Java generics, and .NET generics are often considered similar, generics only mimic the basic behavior of C++ templates. Some of the advanced template features utilized by libraries such as Boost and STLSoft, and implementations of the STL itself, for template metaprogramming (explicit or partial specialization, default template arguments, template non-type arguments, template template arguments, ...) are not available with generics.
The D programming language attempts to build on C++ by creating an even more powerful template system. A significant addition is the inclusion of the static if statement, which allows conditional compilation of code based on any information known at compile time.
Such a variadic template function works for any number of arguments, with the foreach iteration over the tuple of arguments expanded at compile time.
In C++ templates, the compile-time cases are performed by pattern matching over the template arguments, so the Factorial template's base cases are implemented by matching 0 and 1 rather than with an inequality test, which is unavailable.



JHTML stands for Java HTML. This is a page authoring system developed at Art Technology Group (ATG). Files with a ".jhtml" filename extension contain standard HTML tags in addition to proprietary tags that reference Java objects running on a special server set up to handle requests for pages of this sort.
When a request is made for a JHTML page, e.g. "index.jhtml", the request for this page is forwarded from the HTTP server to another system running a Java application server. The JHTML page is compiled first into a .java file and then into a Java .class file. The application server runs the code in the .class file as a servlet whose sole function is to emit a stream of standard HTTP and HTML data back to the HTTP server and on back to the client software (the web browser, usually) that originally requested the document. The principal benefit of this system is that it allows logic running in Java on the application server to generate the HTML dynamically. Often a database is queried to accumulate the specific data needed in the page.
The system is derivative of earlier forms of CGI programming that allow a program running on a web server to generate HTML dynamically. With JHTML, you can author standard HTML and just insert a few extra tags that represent the pieces of the HTML page data that Java should be used to create. JHTML is a proprietary technology of ATG. Sun Microsystems licensed parts of this technology and developed the JSP system from the ATG page compilation system. Even though many popular sites are still using JHTML, the JSP standard has largely superseded it.


Dynamic HTML

Dynamic HTML, or DHTML, is a collection of technologies used together to create interactive and animated web sites by using a combination of a static markup language (such as HTML), a client-side scripting language (such as JavaScript), a presentation definition language (such as CSS), and the Document Object Model.
DHTML allows scripting languages to change variables in a web page's definition language, which in turn affects the look and function of otherwise "static" HTML page content, after the page has been fully loaded and during the viewing process. Thus the dynamic characteristic of DHTML is the way it functions while a page is viewed, not in its ability to generate a unique page with each page load.
By contrast, a dynamic web page is a broader concept — any web page generated differently for each user, load occurrence, or specific variable values. This includes pages created by client-side scripting, and ones created by server-side scripting (such as PHP or Perl) where the web server generates content before sending it to the client.
DHTML is often used to create rollover buttons, drop-down menus, and other interactive elements on a web page.
A less common use is to create browser-based action games. During the late 1990s and early 2000s, a number of games were created using DHTML, such as Kingdom of Loathing, but differences between browsers made this difficult: many techniques had to be implemented in code to enable the games to work on multiple platforms. Recently browsers have been converging towards the web standards, which has made the design of DHTML games more viable. Those games can be played on all major browsers and they can also be ported to Widgets for Mac OS X and Gadgets for Windows Vista, which are based on DHTML code.
The term has fallen out of use in recent years, as DHTML scripts often tended to not work well between various web browsers. DHTML may now be referred to as unobtrusive JavaScript coding (DOM Scripting), in an effort to place an emphasis on agreed-upon best practices while allowing similar effects in an accessible, standards-compliant way.
Some disadvantages of DHTML are that it is difficult to develop and debug due to varying degrees of support among web browsers of the technologies involved, and that the variety of screen sizes means the end look can only be fine-tuned on a limited number of browser and screen-size combinations. Development for relatively recent browsers, such as Internet Explorer 5.0+, Mozilla Firefox 2.0+, and Opera 7.0+, is aided by a shared Document Object Model. Basic DHTML support was introduced with Internet Explorer 4.0, although there was a basic dynamic system with Netscape Navigator 4.0.


Search engine optimization

>> Monday, June 1, 2009

Search engine optimization (SEO) is the process of improving the volume or quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results. Typically, the earlier a site appears in the search results list, the more visitors it will receive from the search engine. SEO may target different kinds of search, including image search, local search, and industry-specific vertical search engines. This gives a web site greater web presence.
As an Internet marketing strategy, SEO considers how search engines work and what people search for. Optimizing a website primarily involves editing its content and HTML coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines.
The acronym "SEO" can also refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems and shopping carts that are easy to optimize.
Another class of techniques, known as black hat SEO or Spamdexing, use methods such as link farms and keyword stuffing that degrade both the relevance of search results and the user-experience of search engines. Search engines look for sites that employ these techniques in order to remove them from their indices.
Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a page, or URL, to the various engines, which would send a spider to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page: the words it contains, where they are located, any weight given to specific words, and all the links the page contains, which are then placed into a scheduler for crawling at a later date.
Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase search engine optimization probably came into use in 1997.
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. But using meta data to index pages was found to be less than reliable because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches. Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.
By relying so much on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, allowing those results to be manipulated would drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.
Graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links. PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, following links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming. In recent years major search engines have begun to rely more heavily on off-web factors such as the age, sex, location, and search history of people conducting searches in order to further refine results.
By 2007, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. Google says it ranks sites using more than 200 different signals. The three leading search engines, Google, Yahoo and Microsoft's Live Search, do not disclose the algorithms they use to rank pages. Notable SEOs, such as Rand Fishkin, Barry Schwartz, Aaron Wall and Jill Whalen, have studied different approaches to search engine optimization, and have published their opinions in online forums and blogs. SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.
Relationship with search engines
By 1997 search engines recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.
Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEOs. In 2005, an annual conference, AIRWeb, Adversarial Information Retrieval on the Web, was created to discuss and minimize the damaging effects of aggressive web content providers.
SEO companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients. Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban. Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.
Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. Major search engines provide information and guidelines to help with site optimization. Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website. Google guidelines are a list of suggested practices Google has provided as guidance to webmasters. Yahoo! Site Explorer provides a way for webmasters to submit URLs, determine how many pages are in the Yahoo! index and view link information.
Getting indexed
The leading search engines, Google, Yahoo! and Microsoft, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or cost per click. Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results. Yahoo's paid inclusion program has drawn criticism from advertisers and competitors. Two major directories, the Yahoo Directory and the Open Directory Project, both require manual submission and human editorial review. Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that aren't discoverable by automatically following links.
Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.
Preventing crawling
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
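The exclusion mechanisms described above can be sketched with a hypothetical robots.txt (the paths below are illustrative, not from any real site):

```text
# robots.txt, served from the root directory of the domain
User-agent: *
Disallow: /cart/      # shopping-cart and login-specific pages
Disallow: /search     # internal search results

# Per-page exclusion uses a robots meta tag in the page's head instead:
# <meta name="robots" content="noindex, nofollow" />
```

The robots.txt file excludes whole paths from crawling, while the meta tag lets an individual page that has already been crawled opt out of indexing.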
White hat versus black hat
SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Some industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO. White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.
An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. Infamous examples are the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices, and the April 2006 removal of the PPC agency BigMouthMedia. All three companies, however, quickly apologized, fixed the offending pages, and were restored to Google's index.
As a marketing strategy
Eye tracking studies have shown that searchers scan a search results page from top to bottom and left to right (for left to right languages), looking for a relevant result. Placement at or near the top of the rankings therefore increases the number of searchers who will visit a site. However, more search engine referrals does not guarantee more sales. SEO is not necessarily an appropriate strategy for every website, and other Internet marketing strategies can be much more effective, depending on the site operator's goals. A successful Internet marketing campaign may drive organic traffic to web pages, but it also may involve the use of paid advertising on search engines and other pages, building high quality web pages to engage and persuade, addressing technical issues that may keep search engines from crawling and indexing those sites, setting up analytics programs to enable site owners to measure their successes, and improving a site's conversion rate.
SEO may generate a return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors. It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic. A top-ranked SEO blog has reported, "Search marketers, in a twist of irony, receive a very small share of their traffic from search engines." Instead, their main sources of traffic are links from other websites.
International markets
Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches. In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007. As of 2006, Google had an 85-90% market share in Germany. While there were hundreds of SEO firms in the US at that time, there were only about five in Germany. As of June 2008, Google's market share in the UK was close to 90% according to Hitwise. That market share is achieved in a number of countries.
As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable markets where this is the case are China, Japan, South Korea, Russia and the Czech Republic, where Baidu, Yahoo! Japan, Naver, Yandex and Seznam, respectively, are the market leaders.
Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.
Legal precedents
On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."
In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart's website was removed from Google's index prior to the lawsuit and the amount of traffic to the site dropped by 70%. On March 16, 2007 the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.


Real user monitoring

Real user monitoring (RUM) is a passive web monitoring technology that records all user interaction with a website. Monitoring actual user interaction with a website is important to website operators for determining whether users are being served quickly and error-free and, if not, which part of a business process is failing. Software as a Service (SaaS) and Application Service Providers (ASP) use RUM to monitor and manage the service quality delivered to their clients. Real user monitoring data is used to determine the actual service-level quality delivered to end-users and to detect errors or slowdowns on web sites. The data may also be used to determine whether changes promulgated to sites have the desired effect or cause errors.
Organizations also use RUM to test website changes prior to deployment by monitoring for errors or slowdowns in the pre-deployment phase. They may also use it to test changes to production websites, or to anticipate behavioural changes in a website. For example, a website may add an area where users are likely to congregate before moving forward as a group, such as test takers logging into a website over twenty minutes and then simultaneously beginning a test; this is called a rendezvous in test environments. Changes to websites such as these can be tested with RUM.
Real user monitoring is typically "passive monitoring": the RUM device collects web traffic without having any effect on the operation of the site. In some limited cases it also uses JavaScript injected into a page to provide feedback from the browser.
Passive monitoring can be very helpful in troubleshooting performance problems once they have occurred. Passive monitoring differs from synthetic monitoring in that it relies on actual inbound and outbound web traffic to take measurements.
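The browser-side portion of such JavaScript-based RUM can be sketched as follows; the metric names and the /rum collection endpoint are illustrative assumptions, not the API of any particular product:

```javascript
// Sketch of client-side RUM instrumentation. The timestamp fields mirror
// the browser's navigation timing marks; /rum is a hypothetical endpoint.

// Derive a few common metrics (all values in milliseconds).
function computeMetrics(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,   // DNS resolution time
    connect: t.connectEnd - t.connectStart,         // TCP connect time
    ttfb: t.responseStart - t.requestStart,         // time to first byte
    pageLoad: t.loadEventStart - t.navigationStart  // full page load
  };
}

// Build the beacon URL; in a browser this would typically be requested
// via an image object on the window "load" event.
function beaconUrl(endpoint, metrics) {
  const qs = Object.entries(metrics)
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join('&');
  return `${endpoint}?${qs}`;
}
```

Because the script only reads timestamps and fires a tiny request after the page has loaded, it stays close to the passive ideal: it observes real visits without altering how the site itself operates.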


Cashback website

How they work
The cashback website receives a commission payment from the retailer and, once the purchase, free trial, or information request is confirmed, shares this payment with the customer who made it. This means that the cashback site makes a profit on the sale, free trial, or information request, while the consumer recoups some of the initial outlay or, in the case of a free trial or information request, earns some money from it.
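As a toy illustration of that split (the rates below are invented; real commission structures vary by retailer):

```javascript
// Hypothetical commission split for a cashback purchase.
// purchase: order value; commissionRate: fraction the retailer pays the
// cashback site; shareToCustomer: fraction of that commission passed on.
function cashbackSplit(purchase, commissionRate, shareToCustomer) {
  const commission = purchase * commissionRate;
  const customerCashback = commission * shareToCustomer;
  const siteProfit = commission - customerCashback;
  return { commission, customerCashback, siteProfit };
}

// A 100-pound order with a 10% commission shared half-and-half gives the
// customer 5 pounds back and leaves the site 5 pounds of profit.
```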
Users can earn money on many types of online purchase, ranging from consumer electronics to finance products and furniture. The most profitable cashback deals often involve finance-related goods such as insurance products, loans, and mortgages, where significant amounts of cashback can be generated from a single purchase. Expensive fashion retailers also tend to carry high-percentage cashback rebates due to their profit margins.
Some cashback websites place a threshold on when a customer can withdraw their earnings, making it necessary for the customer to return to the cashback site and add to their earnings in order to reach the threshold target, driving loyalty to the site. This is particularly important since many of the shopping websites advertised through cashback websites would rather customers visit their sites directly.
Other cashback websites impose no such withdrawal threshold.
When the member's earnings have reached the threshold they can request a payment from the cashback website. Usually this will be paid via BACS (bank transfer) or there may be the option of being paid in gift vouchers. The gift vouchers are usually obtained at a trade price, and so the cashback website saves money by paying people in online gift vouchers. These vouchers can be sent using e-mail to the customers, and consist of a code of numbers and letters, which when entered on certain online stores can be used to deduct money from the final cost of their order.
Many cashback sites incentivise their customers further by offering them benefits or cash to refer others to the site. This benefits both the cashback website and the customer: it helps the customer build up their cashback and helps the cashback website grow by signing up new members without having to spend money on advertising.
Cashback websites also enable consumers to save money online by providing them with various coupons and discounts. Bigger cashback websites will even feature exclusive coupons available only at those sites. By combining the coupons and discounts with the cash back rewards, consumers can save even more.
The number of cashback websites has increased due to the presence of companies like Tradedoubler and Commission Junction, which make it relatively easy to create this kind of website. Other low barriers to entry mean there are many new companies still entering the market.


Style sheet (web development)

>> Wednesday, May 13, 2009

Web style sheets are a form of separation of presentation and content for web design in which the markup (i.e., HTML or XHTML) of a webpage contains the page's semantic content and structure, but does not define its visual layout (style). Instead, the style is defined in an external stylesheet file using a language such as CSS or XSL. This design approach is identified as a "separation" because it largely supersedes the antecedent methodology in which a page's markup defined both style and structure.
The philosophy underlying this methodology is a specific case of separation of concerns.
Separation of style and content has many benefits, but has only become practical in recent years due to improvements in popular web browsers' CSS implementations.
Overall, the experience of users visiting a site that uses style sheets will generally be quicker than on sites that do not use the technology. "Overall" because the first page will probably load more slowly, since both the style sheet and the content need to be transferred. Subsequent pages will load faster because no style information needs to be downloaded; the CSS file will already be in the browser's cache.
Holding all the presentation styles in one file significantly reduces maintenance time and reduces the chance of human errors, thereby improving presentation consistency. For example, the font color associated with a type of text element may be specified — and therefore easily modified — throughout an entire website simply by changing one short string of characters in a single file. The alternate approach, using styles embedded in each individual page, would require a cumbersome, time consuming, and error-prone edit of every file.
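As a minimal sketch, one declaration in a shared stylesheet (the file name site.css is illustrative) controls a property site-wide:

```css
/* site.css, linked from every page: editing this one value changes
   the colour of all h2 headings across the entire website */
h2 { color: #336699; }
```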
Sites that use CSS with either XHTML or HTML are easier to tweak so that they appear extremely similar in different browsers (Internet Explorer, Mozilla Firefox, Opera, Safari, etc.).
Sites using CSS "degrade gracefully" in browsers unable to display graphical content, such as Lynx, or those so very old that they cannot use CSS. Browsers ignore CSS that they do not understand, such as CSS 3 statements. This enables a wide variety of user agents to be able to access the content of a site even if they cannot render the stylesheet or are not designed with graphical capability in mind. For example, a browser using a refreshable braille display for output could disregard layout information entirely, and the user would still have access to all page content.
If a page's layout information is all stored externally, a user can decide to disable the layout information entirely, leaving the site's bare content still in a readable form. Site authors may also offer multiple stylesheets, which can be used to completely change the appearance of the site without altering any of its content.
Most modern web browsers also allow the user to define their own stylesheet, which can include rules that override the author's layout rules. This allows users, for example, to bold every hyperlink on every page they visit.
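For example, a user stylesheet along these lines would embolden every hyperlink, overriding the author's rules:

```css
/* user stylesheet: !important lets the user's rule win over the author's */
a { font-weight: bold !important; }
```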
Because the semantic file contains only the meanings an author intends to convey, the styling of the various elements of the document's content is very consistent. For example, headings, emphasized text, lists and mathematical expressions all receive consistently applied style properties from the external stylesheet. Authors need not concern themselves with the style properties at the time of composition. These presentational details can be deferred until the moment of presentation.
The deferment of presentational details until the time of presentation means that a document can be easily re-purposed for an entirely different presentation medium with merely the application of a new stylesheet already prepared for the new medium and consistent with elemental or structural vocabulary of the semantic document. A carefully authored document for a web page can easily be printed to a hard-bound volume complete with headers and footers, page numbers and a generated table of contents simply by applying a new stylesheet.
Practical disadvantages today
Currently, specifications (for example, XHTML, XSL, CSS) and the software tools implementing these specifications are only reaching the early stages of maturity, so there are some practical issues facing authors who seek to embrace this method of separating content and style.
Complex layouts
One of the practical problems is the lack of proper support for style languages in major browsers. Typical web page layouts call for some tabular presentation of the major parts of the page such as menu navigation columns and header bars, navigation tabs, and so on. However, deficient support for CSS and XSL in major browsers forces authors to code these tables within their content rather than applying a tabular style to the content from the accompanying stylesheet.
Narrow adoption without the parsing and generation tools
While the style specifications are still maturing, the software tools have been slow to adapt. Most of the major web development tools still embrace a mixed presentation-content model, so authors and designers looking for GUI-based tools for their work find it difficult to follow the semantic web method. In addition to GUI tools, shared repositories for generalized stylesheets would probably aid adoption of these methods.


Comparison of stylesheet languages

The two primary stylesheet languages are Cascading Style Sheets (CSS) and the Extensible Stylesheet Language (XSL). While they are both called stylesheet languages, they have very different purposes and ways of going about their tasks.

Cascading Style Sheets
CSS is designed around styling HTML and XML (including XHTML) documents. It was created for that purpose. It uses a special, non-XML syntax for defining the styling information for the various elements of the document that it styles.
CSS, as of version 2.1, is best used for styling documents that are to be shown on "screen media". That is, media displayed as a single page (possibly with hyperlinks) that has a fixed horizontal width but a virtually unlimited vertical height. Scrolling is often the method of choice for viewing parts of screen media. This is in contrast to "paged media", which has multiple pages, each with specific fixed horizontal and vertical dimensions. Styling paged media involves a variety of complexities that screen media does not. Since CSS was designed originally for screen media, its paged facilities are lacking.
CSS version 3.0 provides new features that allow CSS to more adequately style documents for paged display.
Extensible Stylesheet Language
XSL has evolved drastically from its initial design into something very different from its original purpose. The original idea for XSL was to create an XML-based styling language directed towards paged display media. The mechanism its designers used to accomplish this was to divide the process into two distinct steps.
First, the XML document would be transformed into an intermediate form. The process for performing this transformation would be governed by the XSL stylesheet, as defined by the XSL specification. The result of this transformation would be an XML document in an intermediate language, known as XSL-FO (also defined by the XSL specification).
However, in the process of designing the transformation step, it was realized that a generic XML transformation language would be useful for more than merely creating a presentation of an XML document. As such, a new working group was split off from the XSL working group, and the XSL Transformations (XSLT) language became something that was considered separate from the styling information of the XSL-FO document. Even that split was expanded when XPath became its own separate specification, though still strongly tied to XSLT.
The combination of XSLT and XSL-FO creates a powerful styling language, though one much more complex than CSS. XSLT is a Turing-complete language, while CSS is not; this demonstrates a degree of power and flexibility not found in CSS. Additionally, XSLT is capable of creating content, such as automatically generating a table of contents from the chapters in a book, or removing/selecting content, such as generating only a glossary from a book. XSLT version 1.0 with the EXSLT extensions, or XSLT version 2.0, is capable of generating multiple documents as well, such as dividing the chapters in a book into their own individual pages. By contrast, CSS can only selectively remove content by not displaying it.
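That content-creating capability can be sketched with an XSLT 1.0 template that builds a table of contents from a hypothetical <book> vocabulary (the element names here are invented for illustration):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- For a document like <book><chapter><title>...</title></chapter></book>,
       emit a <toc> containing one numbered entry per chapter. -->
  <xsl:template match="/book">
    <toc>
      <xsl:for-each select="chapter">
        <entry number="{position()}">
          <xsl:value-of select="title"/>
        </entry>
      </xsl:for-each>
    </toc>
  </xsl:template>
</xsl:stylesheet>
```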
XSL-FO is unlike CSS in that the XSL-FO document stands alone. CSS modifies a document that is attached to it, while the XSL-FO document (the result of the transformation by XSLT of the original document) contains all of the content to be presented in a purely presentational format. It has a wide range of specification options with regard to paged formatting and higher-quality typesetting. But it does not specify the pages themselves. The XSL-FO document must be passed through an XSL-FO processor utility that generates the final paged media, much like HTML+CSS must pass through a web browser to be displayed in its formatted state.
The complexity of XSL-FO is a problem, largely because implementing an FO processor is very difficult. CSS implementations in web browsers are still not entirely compatible with one another, and it is much simpler than writing an FO processor. However, for richly specified paged media, such complexity is ultimately required in order to be able to solve various typesetting problems.



Adobe Dreamweaver (formerly Macromedia Dreamweaver) is a web development application originally created by Macromedia, and is now developed by Adobe Systems, which acquired Macromedia in 2005.
Dreamweaver is available for both Mac and Windows operating systems. Recent versions have incorporated support for web technologies such as CSS, JavaScript, and various server-side scripting languages and frameworks including ASP, ColdFusion, and PHP.
Although a hybrid WYSIWYG and code-based web design and development application, Dreamweaver's WYSIWYG mode can hide the HTML code details of pages from the user, making it possible for non-coders to create web pages and sites. One criticism of this approach is that it has the potential to produce HTML pages whose file size and amount of HTML code is larger than an optimally hand-coded page would be, which can cause web browsers to perform poorly. This can be particularly true because the application makes it very easy to create table-based layouts. In addition, some web site developers have criticized Dreamweaver in the past for producing code that often does not comply with W3C standards, though recent versions have been more compliant. Dreamweaver 8.0 performed poorly on the Acid2 Test, developed by the Web Standards Project. However, Adobe has focused on support for standards-based layout in recent and current versions of the application, including the ability to convert tables to layers.
Dreamweaver allows users to preview websites in locally-installed web browsers. It also has site management tools, such as FTP/SFTP and WebDAV file transfer and synchronization features, the ability to find and replace lines of text or code by search terms and regular expressions across the entire site, and a templating feature that allows single-source update of shared code and layout across entire sites without server-side includes or scripting. The behaviours panel also enables use of basic JavaScript without any coding knowledge, and integration with Adobe's Spry AJAX framework offers easy access to dynamically-generated content and interfaces.
Dreamweaver can utilize third-party "Extensions" to enable and extend core functionality of the application, which any web developer can write (largely in HTML and JavaScript). Dreamweaver is supported by a large community of extension developers who make extensions available (both commercial and free) for most web development tasks from simple rollover effects to full-featured shopping carts.
Like other HTML editors, Dreamweaver edits files locally, then uploads all edited files to the remote web server using FTP, SFTP, or WebDAV. Dreamweaver CS4 now supports the Subversion (SVN) version control system.
Syntax highlighting
As of version 6, Dreamweaver supports syntax highlighting for the following languages out of the box:
• ActionScript
• Active Server Pages (ASP)
• C#
• Cascading Style Sheets (CSS)
• ColdFusion
• Extensible HyperText Markup Language (XHTML)
• Extensible Markup Language (XML)
• Extensible Stylesheet Language Transformations (XSLT)
• HyperText Markup Language (HTML)
• Java
• JavaScript
• JavaServer Pages (JSP)
• PHP: Hypertext Preprocessor (PHP)
• Visual Basic (VB)
• Visual Basic Script Edition (VBScript)
• Wireless Markup Language (WML)
It is also possible to add your own language syntax highlighting to its repertoire.
In addition, code completion is available for many of these languages.
Version history
• Macromedia Dreamweaver 1.0, December 1997 (initial release)
• Dreamweaver 1.2, March 1998
• Dreamweaver 2.0, December 1998
• Dreamweaver 3.0, December 1999
• Dreamweaver UltraDev 1.0, June 1999
• Dreamweaver 4.0, December 2000
• Dreamweaver UltraDev 4.0, December 2000
• Dreamweaver MX (6.0), May 29, 2002
• Dreamweaver MX 2004 (7.0), September 10, 2003
• Dreamweaver 8.0, September 13, 2005
• Adobe Dreamweaver CS3 (9.0), April 16, 2007 (replaced Adobe GoLive in the Creative Suite series)
• Adobe Dreamweaver CS4 (10.0), September 23, 2008


Cascading Style Sheets

Cascading Style Sheets (CSS) is a style sheet language used to describe the presentation (that is, the look and formatting) of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can be applied to any kind of XML document, including SVG and XUL.
CSS is designed primarily to enable the separation of document content (written in HTML or a similar markup language) from document presentation, including elements such as the colors, fonts, and layout. This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple pages to share formatting, and reduce complexity and repetition in the structural content (such as by allowing for tableless web design). CSS can also allow the same markup page to be presented in different styles for different rendering methods, such as on-screen, in print, by voice (when read out by a speech-based browser or screen reader) and on Braille-based, tactile devices. While the author of a document typically links that document to a CSS stylesheet, readers can use a different stylesheet, perhaps one on their own computer, to override the one the author has specified.
CSS specifies a priority scheme to determine which style rules apply if more than one rule matches against a particular element. In this so-called cascade, priorities or weights are calculated and assigned to rules, so that the results are predictable.
The CSS specifications are maintained by the World Wide Web Consortium (W3C). The Internet media type (MIME type) text/css is registered for use with CSS by RFC 2318.
CSS has a simple syntax, and uses a number of English keywords to specify the names of various style properties.
A style sheet consists of a list of rules. Each rule or rule-set consists of one or more selectors and a declaration block. A declaration block consists of a list of semicolon-separated declarations in braces. Each declaration itself consists of a property, a colon (:), a value, and a semicolon (;).
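A minimal rule-set showing that anatomy (the property choices are arbitrary):

```css
/* selector { property: value; property: value; } */
p {
  color: navy;        /* one declaration */
  margin: 1em 0;      /* another declaration, semicolon-terminated */
}
```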
In CSS, selectors are used to declare which elements a style applies to, a kind of match expression. Selectors may apply to all elements of a specific type, or only those elements which match a certain attribute; elements may be matched depending on how they are placed relative to each other in the markup code, or on how they are nested within the document object model.
In addition to these, a set of pseudo-classes can be used to define further behavior. Probably the best-known of these is :hover, which applies a style only when the user 'points to' the visible element, usually by holding the mouse cursor over it. It is appended to a selector as in a:hover or #elementid:hover. Other pseudo-classes and pseudo-elements are, for example, :first-line, :visited or :before. A special pseudo-class is :lang(c), which matches elements on the basis of their language "c".
A pseudo-class selects entire elements, such as :link or :visited, whereas a pseudo-element makes a selection that may consist of partial elements, such as :first-line or :first-letter.
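A short contrast between the two (the styles are chosen only for illustration):

```css
a:hover { text-decoration: underline; }    /* pseudo-class: whole element */
p:first-line { font-variant: small-caps; } /* pseudo-element: part of one */
```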
Selectors may be combined in other ways too, especially in CSS 2.1, to achieve greater specificity and flexibility.
Use of CSS
Prior to CSS, nearly all of the presentational attributes of HTML documents were contained within the HTML markup; all font colors, background styles, element alignments, borders and sizes had to be explicitly described, often repeatedly, within the HTML. CSS allows authors to move much of that information to a separate stylesheet resulting in considerably simpler HTML markup.
Headings (h1 elements), sub-headings (h2), sub-sub-headings (h3), etc., are defined structurally using HTML. In print and on the screen, choice of font, size, color and emphasis for these elements is presentational.
Prior to CSS, document authors who wanted to assign such typographic characteristics to, say, all h2 headings had to use the HTML font and other presentational elements for each occurrence of that heading type. The additional presentational markup in the HTML made documents more complex, and generally more difficult to maintain. In CSS, presentation is separated from structure. In print, CSS can define color, font, text alignment, size, borders, spacing, layout and many other typographic characteristics. It can do so independently for on-screen and printed views. CSS also defines non-visual styles such as the speed and emphasis with which text is read out by aural text readers. The W3C now considers the advantages of CSS for defining all aspects of the presentation of HTML pages to be superior to other methods. It has therefore deprecated the use of all the original presentational HTML markup.
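For instance, where authors once wrapped every second-level heading in presentational markup, a single rule now covers them all:

```css
/* replaces repeating <font color="red"><i>...</i></font> inside each h2 */
h2 { color: red; font-style: italic; }
```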
CSS information can be provided by various sources. CSS style information can be either attached as a separate document or embedded in the HTML document. Multiple style sheets can be imported. Different styles can be applied depending on the output device being used; for example, the screen version can be quite different from the printed version, so that authors can tailor the presentation appropriately for each medium.
• Author styles (style information provided by the web page author), in the form of
o external stylesheets, i.e. a separate CSS-file referenced from the document
o embedded style, blocks of CSS information inside the HTML document itself
o inline styles, inside the HTML document, style information on a single element, specified using the "style" attribute.
• User style
o a local CSS-file specified by the user using options in the web browser, and acting as an override, to be applied to all documents.
• User agent style
o the default style sheet applied by the user agent, e.g. the browser's default presentation of elements.
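The three author-style sources can be seen side by side in a contrived document (site.css is an illustrative file name):

```html
<head>
  <link rel="stylesheet" href="site.css">  <!-- external stylesheet -->
  <style>h1 { color: maroon; }</style>     <!-- embedded style block -->
</head>
<body>
  <p style="font-size: 90%">Inline style on a single element.</p>
</body>
```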
One of the goals of CSS is also to allow users a greater degree of control over presentation; those who find the red italic headings difficult to read may apply other style sheets to the document. Depending on their browser and the web site, a user may choose from various stylesheets provided by the designers, may remove all added style and view the site using their browser's default styling or may perhaps override just the red italic heading style without altering other attributes.
File highlightheaders.css containing:
h1 { color: white; background: orange !important; }
h2 { color: white; background: green !important; }
Such a file is stored locally and is applied if the user has specified it in the browser options. "!important" means that it prevails over the author's specifications.
Style sheets have existed in one form or another since the beginnings of SGML in the 1970s. Cascading Style Sheets were developed as a means for creating a consistent approach to providing style information for web documents.
As HTML grew, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. This evolution gave the designer more control over site appearance but at the cost of HTML becoming more complex to write and maintain. Variations in web browser implementations made consistent site appearance difficult, and users had less control over how web content was displayed.
To improve the capabilities of web presentation, nine different style sheet languages were proposed to the W3C's www-style mailing list. Of the nine proposals, two were chosen as the foundation for what became CSS: Cascading HTML Style Sheets (CHSS) and Stream-based Style Sheet Proposal (SSP). First, Håkon Wium Lie (now the CTO of Opera Software) proposed Cascading HTML Style Sheets (CHSS) in October 1994, a language which has some resemblance to today's CSS. Bert Bos was working on a browser called Argo which used its own style sheet language, Stream-based Style Sheet Proposal (SSP). Lie and Bos worked together to develop the CSS standard (the 'H' was removed from the name because these style sheets could be applied to other markup languages besides HTML).
Unlike existing style languages like DSSSL and FOSI, CSS allowed a document's style to be influenced by multiple style sheets. One style sheet could inherit or "cascade" from another, permitting a mixture of stylistic preferences controlled equally by the site designer and user.
Håkon's proposal was presented at the "Mosaic and the Web" conference in Chicago, Illinois in 1994, and again with Bert Bos in 1995. Around this time, the World Wide Web Consortium was being established; the W3C took an interest in the development of CSS, and it organized a workshop toward that end chaired by Steven Pemberton. This resulted in W3C adding work on CSS to the deliverables of the HTML editorial review board (ERB). Håkon and Bert were the primary technical staff on this aspect of the project, with additional members, including Thomas Reardon of Microsoft, participating as well. By the end of 1996, CSS was ready to become official, and the CSS level 1 Recommendation was published in December.
Development of HTML, CSS, and the DOM had all been taking place in one group, the HTML Editorial Review Board (ERB). Early in 1997, the ERB was split into three working groups: HTML Working group, chaired by Dan Connolly of W3C; DOM Working group, chaired by Lauren Wood of SoftQuad; and CSS Working group, chaired by Chris Lilley of W3C.
The CSS Working Group began tackling issues that had not been addressed with CSS level 1, resulting in the creation of CSS level 2 on November 4, 1997. It was published as a W3C Recommendation on May 12, 1998. CSS level 3, which was started in 1998, is still under development as of 2009.
In 2005 the CSS Working Group decided to enforce the requirements for standards more strictly. This meant that already published standards like CSS 2.1, CSS 3 Selectors and CSS 3 Text were pulled back from Candidate Recommendation to Working Draft level.
Difficulty with adoption
Although the CSS1 specification was completed in 1996 and Microsoft's Internet Explorer 3 was released in that year featuring some limited support for CSS, it would be more than three years before any web browser achieved near-full implementation of the specification. Internet Explorer 5.0 for the Macintosh, shipped in March 2000, was the first browser to have full (better than 99 percent) CSS1 support, surpassing Opera, which had been the leader since its introduction of CSS support 15 months earlier. Other browsers followed soon afterwards, and many of them additionally implemented parts of CSS2. As of July 2008, no (finished) browser has fully implemented CSS2, with implementation levels varying (see Comparison of layout engines (CSS)).
Even though early browsers such as Internet Explorer 3 and 4, and Netscape 4.x had support for CSS, it was typically incomplete and afflicted with serious bugs. This was a serious obstacle for the adoption of CSS.
When later 'version 5' browsers began to offer a fairly full implementation of CSS, they were still incorrect in certain areas and were fraught with inconsistencies, bugs and other quirks. The proliferation of such CSS-related inconsistencies and even the variation in feature support has made it difficult for designers to achieve a consistent appearance across platforms. Authors commonly resort to workarounds such as CSS hacks and CSS filters in order to obtain consistent results across web browsers and platforms.
Problems with browsers' patchy adoption of CSS along with errata in the original specification led the W3C to revise the CSS2 standard into CSS2.1, which may be regarded as something nearer to a working snapshot of current CSS support in HTML browsers. Some CSS2 properties which no browser had successfully implemented were dropped, and in a few cases, defined behaviours were changed to bring the standard into line with the predominant existing implementations. CSS2.1 became a Candidate Recommendation on February 25, 2004, but CSS2.1 was pulled back to Working Draft status on June 13, 2005, and only returned to Candidate Recommendation status on July 19, 2007.
In the past, some web servers were configured to serve all documents with the filename extension .css as mime type application/x-pointplus rather than text/css. At the time, the Net-Scene company was selling PointPlus Maker to convert PowerPoint files into Compact Slide Show files (using a .css extension).
CSS has various levels and profiles. Each level of CSS builds upon the last, typically adding new features; the levels are denoted CSS1, CSS2, and CSS3. Profiles are typically a subset of one or more levels of CSS built for a particular device or user interface. Currently there are profiles for mobile devices, printers, and television sets. Profiles should not be confused with media types, which were added in CSS2.
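For illustration, media types let a single stylesheet carry rules for different output devices; the class name below is hypothetical:

```css
/* CSS2 media types: the same stylesheet targets screen and print */
@media screen {
  body { font-size: 13px; }
}
@media print {
  body { font-size: 10pt; }
  .navigation { display: none; } /* hide navigation when printing */
}
```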
The first CSS specification to become an official W3C Recommendation is CSS level 1, published in December 1996. Among its capabilities are support for:
• Font properties such as typeface and emphasis
• Color of text, backgrounds, and other elements
• Text attributes such as spacing between words, letters, and lines of text
• Alignment of text, images, tables and other elements
• Margin, border, padding, and positioning for most elements
• Unique identification and generic classification of groups of attributes
The W3C maintains the CSS1 Recommendation.
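The capabilities listed above can be sketched with a small rule set that uses only CSS1-level features (the selectors and values here are illustrative, not drawn from any particular site):

```css
/* Font properties, color, text attributes, alignment, box properties */
h1 {
  font-family: Georgia, serif;  /* typeface */
  font-style: italic;           /* emphasis */
  color: #333333;               /* text color */
  letter-spacing: 0.05em;       /* spacing between letters */
  line-height: 1.4;             /* spacing between lines */
  text-align: center;           /* alignment */
  margin: 1em 0;                /* margin around the element */
}
#masthead { background: #eeeeff; }  /* unique identification (id) */
.note     { padding: 0.5em; }       /* generic classification (class) */
```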
CSS level 2 was developed by the W3C and published as a Recommendation in May 1998. A superset of CSS1, CSS2 includes a number of new capabilities like absolute, relative, and fixed positioning of elements, the concept of media types, support for aural style sheets and bidirectional text, and new font properties such as shadows. The W3C maintains the CSS2 Recommendation.
CSS level 2 revision 1 or CSS 2.1 fixes errors in CSS2, removes poorly-supported features and adds already-implemented browser extensions to the specification. While it was a Candidate Recommendation for several months, on June 15, 2005 it was reverted to a working draft for further review. It was returned to Candidate Recommendation status on 19 July 2007.
CSS level 3 is currently under development. The W3C maintains a CSS3 progress report. CSS3 is modularized and will consist of several separate Recommendations. The W3C CSS3 Roadmap provides a summary and introduction.
Browser support
A CSS filter is a coding technique that aims at hiding or showing parts of the CSS to different browsers, either by exploiting CSS-handling quirks or bugs in the browser, or by taking advantage of lack of support for parts of the CSS specifications. Using CSS filters, some designers have gone as far as delivering entirely different CSS to certain browsers in order to ensure that designs are rendered as expected. Because very early web browsers were either completely incapable of handling CSS or rendered CSS very poorly, designers today routinely use CSS filters that completely prevent these browsers from accessing any of the CSS. Internet Explorer support for CSS began with IE 3.0 and increased progressively with each version. By 2008, the first Beta of Internet Explorer 8 offered support for CSS 2.1 in its best web standards mode.
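Two widely documented filters illustrate the technique; the selectors and filename are hypothetical examples:

```css
/* The "star-html" hack: only IE 6 and earlier match a universal
   selector as an ancestor of the root html element, so this rule
   is applied by IE 6 and ignored by standards-compliant browsers */
* html .sidebar { width: 200px; }

/* Hiding a stylesheet from very old browsers: Netscape 4 does not
   load files referenced via @import, so rules it would mangle can
   be kept in a separate file it never fetches */
@import "modern.css";
```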
An example of a well-known CSS browser bug is the Internet Explorer box model bug, where box widths are interpreted incorrectly in several versions of the browser, resulting in blocks which are too narrow when viewed in Internet Explorer, but correct in standards-compliant browsers. The bug can be avoided in Internet Explorer 6 by using the correct doctype in (X)HTML documents. CSS hacks and CSS filters are used to compensate for bugs such as this, just one of hundreds of CSS bugs that have been documented in various versions of Netscape, Mozilla Firefox, Opera, and Internet Explorer (including Internet Explorer 7).
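The box model bug can be seen in a rule like the following. In the standards box model the rendered width is content plus padding plus borders; affected IE versions (IE 5, and IE 6 in quirks mode) instead squeeze padding and borders inside the declared width, producing a narrower box:

```css
/* Standards model: rendered width = 200px content
   + 2 × 20px padding + 2 × 2px border = 244px.
   IE in quirks mode renders the whole box at 200px,
   shrinking the content area to 156px. */
.column {
  width: 200px;
  padding: 20px;
  border: 2px solid black;
}
```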
Even when the availability of CSS-capable browsers made CSS a viable technology, the adoption of CSS was still held back by designers' struggles with browsers' incorrect CSS implementation and patchy CSS support. Even today, these problems continue to make the business of CSS design more complex and costly than it should be, and cross-browser testing remains a necessity. Other reasons for continuing non-adoption of CSS are: its perceived complexity, authors' lack of familiarity with CSS syntax and required techniques, poor support from authoring tools, the risks posed by inconsistency between browsers and the increased costs of testing.
Currently there is strong competition between Mozilla's Gecko layout engine used in Firefox, the WebKit layout engine used in Apple Safari and Google Chrome, the similar KHTML engine used in KDE's Konqueror browser, and Opera's Presto layout engine, each of which leads in different aspects of CSS. As of April 2009, Internet Explorer 8 has the most complete implementation of CSS 2.1 according to one source, scoring 99%.
Some noted disadvantages of using "pure" CSS include:
Inconsistent browser support
Different browsers will render CSS layout differently as a result of browser bugs or lack of support for CSS features. For example, older versions of Microsoft Internet Explorer, such as IE 6.0, implemented many CSS 2.0 properties in their own incompatible way and misinterpreted a significant number of important properties, such as width, height, and float. Numerous so-called CSS "hacks" must be implemented to achieve consistent layout among the most popular browsers. Pixel-precise layouts can sometimes be impossible to achieve across browsers.
Selectors are unable to ascend
CSS offers no way to select a parent or ancestor of an element that satisfies certain criteria. A more advanced selector scheme (such as XPath) would enable more sophisticated stylesheets. However, the major reasons for the CSS Working Group rejecting proposals for parent selectors are related to browser performance and incremental rendering issues.
One block declaration cannot explicitly inherit from another
Inheritance of styles is performed by the browser based on the containment hierarchy of DOM elements and the specificity of the rule selectors, as described in section 6.4.1 of the CSS2 specification. Authors cannot define one declaration block in terms of another; the only way to reuse a block is to list its class name in the class attribute of each DOM element that needs it.
Vertical control limitations
While horizontal placement of elements is generally easy to control, vertical placement is frequently unintuitive, convoluted, or impossible. Simple tasks, such as centering an element vertically or getting a footer placed no higher than the bottom of the viewport, require either complicated and unintuitive style rules or simple but widely unsupported ones.
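The contrast can be sketched as follows; the class names are hypothetical:

```css
/* Horizontal centering of a fixed-width box is one rule: */
.box { width: 300px; margin: 0 auto; }

/* Vertical centering of content of unknown height typically needs
   a workaround. One simple approach uses CSS table display values,
   which are not supported by IE 6 or IE 7: */
.outer { display: table; height: 400px; }
.inner { display: table-cell; vertical-align: middle; }
```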
Absence of expressions
There is currently no ability to specify property values as simple expressions (such as margin-left: 10% - 3em + 4px;). This is useful in a variety of cases, such as calculating the size of columns subject to a constraint on the sum of all columns. However, a working draft with a calc() value to address this limitation has been published by the CSS WG. Internet Explorer versions 5 to 7 support a proprietary expression() statement, with similar functionality. This proprietary expression() statement is no longer supported from Internet Explorer 8 onwards, except in compatibility modes. This decision was taken for "standards compliance, browser performance, and security reasons".
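The two approaches mentioned above look like this; the calc() syntax follows the working draft and, as of this writing, is not implemented by shipping browsers, while expression() evaluates a JavaScript expression and works only in IE 5 through 7:

```css
/* Proposed CSS3 calc() value (working draft only): */
.content { margin-left: calc(10% - 3em + 4px); }

/* IE 5-7 proprietary equivalent, dropped in IE 8 standards mode: */
.content { margin-left: expression(document.body.clientWidth / 10 - 50); }
```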
Lack of orthogonality
Multiple properties often end up doing the same job. For instance, position, display and float specify the placement model, and most of the time they cannot be combined meaningfully. A display: table-cell element cannot be floated or given position: relative, and an element with float: left should not react to changes of display. In addition, some properties are not defined in a flexible way that avoids the creation of new properties. For example, the spacing between table cells should be set with the "border-spacing" property on the table element rather than with "margin-*" properties on the cell elements, because according to the CSS specification internal table elements do not have margins.
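The table-spacing case reads as follows:

```css
/* Spacing between cells must come from the table itself: */
table { border-spacing: 4px; }  /* works */

/* Margins on internal table elements have no effect per the spec: */
td { margin: 4px; }             /* ignored */
```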
Margin collapsing
Margin collapsing, while well-documented and useful, is also complicated and frequently unexpected by authors, and no simple, side-effect-free way is available to control it.
Float containment
CSS does not explicitly offer any property that would force an element to contain floats. When all of a container's children are floated, the container collapses and the floats overflow it. Multiple properties offer containment as a side effect, but none of them is completely appropriate in all situations; generally, either a clearing rule or "overflow: hidden" on the container solves the problem. Float-based layouts also reflow with the browser's window size and resolution, whereas absolutely positioned layouts do not.
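The two common workarounds, each with side effects of its own, can be sketched as (class name hypothetical):

```css
/* Option 1: overflow creates a new block formatting context that
   contains floats, but it also clips any overflowing content */
.container { overflow: hidden; }

/* Option 2: the "clearfix" approach using generated content
   (not supported by IE 6, which needs further hacks) */
.container:after {
  content: "";
  display: block;
  clear: both;
}
```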
Lack of multiple backgrounds per element
Highly graphical designs often require several background images for a single element, while CSS 2.1 supports only one per element. Therefore, developers have to choose between adding redundant wrappers around document elements or dropping the visual effect. This is partially addressed in the working draft of the CSS3 backgrounds module, which is already supported in Safari and Konqueror.
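The draft syntax layers backgrounds with a comma-separated list; the image URLs here are placeholders:

```css
/* CSS3 backgrounds draft: multiple layers on one element */
div.banner {
  background: url(corner-left.png) no-repeat top left,
              url(corner-right.png) no-repeat top right;
}
/* In CSS 2.1 the same effect needs one wrapper element
   per additional background image. */
```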
Control of Element Shapes
CSS currently offers only rectangular shapes. Rounded corners or other shapes may require non-semantic markup. However, this is addressed in the working draft of the CSS3 backgrounds module.
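Under the draft, rounded corners need no extra markup or images, though at present the property ships only behind vendor prefixes (class name hypothetical):

```css
.panel {
  -moz-border-radius: 8px;     /* Gecko (Firefox) */
  -webkit-border-radius: 8px;  /* WebKit (Safari, Chrome) */
  border-radius: 8px;          /* CSS3 draft syntax */
}
```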
Lack of Variables
CSS contains no variables. This makes it necessary to use error-prone "replace-all" techniques to change fundamental constants, such as the color scheme or various heights and widths. Server-side generation of CSS scripts, using for example PHP, can help to mitigate this problem.
Lack of column declaration
While possible in current CSS, layouts with multiple columns can be complex to implement. With current CSS, the process is often done using floated elements, which are often rendered differently by different browsers and behave differently at different screen sizes and aspect ratios.
Cannot explicitly declare new scope independently of position
Scoping rules for properties such as z-index look for the closest parent element with a position:absolute or position:relative property. This odd coupling has two undesired effects: 1) it is impossible to avoid declaring a new scope when one is forced to adjust an element's position, preventing one from using the desired scope of a parent element, and 2) users are often not aware that they must declare position:relative or position:absolute on any element they want to act as "the new scope". Additionally, a bug in the Firefox browser prevents one from declaring table elements as a new CSS scope using position:relative (one can technically do so, but numerous graphical glitches result).
Poor Layout Controls for Flexible Layouts
While new additions to CSS3 provide a stronger, more robust layout feature-set, CSS is still very much rooted as a styling language, not a layout language.
By combining CSS with the functionality of a Content Management System, a considerable amount of flexibility can be programmed into content submission forms. This allows a contributor who may not be familiar with CSS or HTML, or able to edit them, to select the layout of an article or other page on the fly, in the same form. For instance, a contributor, editor or author of an article or page might be able to select the number of columns and whether or not the page or article will carry an image. This information is then passed to the Content Management System, whose program logic evaluates it and determines, based on a certain number of combinations, how to apply classes and IDs to the HTML elements, thereby styling and positioning them according to the pre-defined CSS for that particular layout type. When working with large-scale, complex sites with many contributors, such as news and informational sites, this advantage weighs heavily on the feasibility and maintenance of the project.
Separation of Content from Presentation
CSS facilitates publication of content in multiple presentation formats based on nominal parameters. Nominal parameters include explicit user preferences, different web browsers, the type of device being used to view the content (a desktop computer or mobile Internet device), the geographic location of the user and many other variables.
Site-wide consistency
When CSS is used effectively, in terms of inheritance and "cascading," a global stylesheet can be used to affect and style elements site-wide. If the situation arises that the styling of the elements should need to be changed or adjusted, these changes can be made easily, simply by editing a few rules in the global stylesheet. Before CSS, this sort of maintenance was more difficult, expensive and time-consuming.
Bandwidth
A stylesheet will usually be stored in the browser cache, and can therefore be used on multiple pages without being reloaded, increasing download speeds and reducing data transfer over a network.
Page reformatting
With a simple change of one line, a different stylesheet can be used for the same page. This has advantages for accessibility, as well as providing the ability to tailor a page or site to different target devices. Furthermore, devices not able to understand the styling will still display the content.