Thursday, 09 January 2014

Uniform Resource Identifier (URI)





Uniform Resource Identifier (URI) is a string of characters used to identify a name of a web resource. Such identification enables interaction with representations of the web resource over a network (typically the World Wide Web) using specific protocols. Schemes specifying a concrete syntax and associated protocols define each URI.

Relationship to URL and URN

URIs can be classified as locators (URLs), as names (URNs), or as both. A uniform resource name (URN) functions like a person's name, while a uniform resource locator (URL) resembles that person's street address. In other words: the URN defines an item's identity, while the URL provides a method for finding it.

The ISBN system for uniquely identifying books provides a typical example of the use of URNs. ISBN 0-486-27557-4 (urn:isbn:0-486-27557-4) cites unambiguously a specific edition of Shakespeare's play Romeo and Juliet. To gain access to this object and read the book, one needs its location: a URL address. A typical URL for this book on a Unix-like operating system would be a file path such as file:///home/username/books/RomeoAndJuliet.pdf, identifying the electronic book library saved on a local disk drive. So URNs and URLs have complementary purposes.
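Both forms can be taken apart with Python's standard `urllib.parse` module. A quick sketch using the two examples above (purely illustrative, not part of either specification):

```python
from urllib.parse import urlsplit

# A URN names the resource: its scheme is "urn" and the rest is an opaque path.
urn = urlsplit("urn:isbn:0-486-27557-4")
print(urn.scheme)  # urn
print(urn.path)    # isbn:0-486-27557-4

# A URL locates the resource: the scheme tells us how to reach it.
url = urlsplit("file:///home/username/books/RomeoAndJuliet.pdf")
print(url.scheme)  # file
print(url.path)    # /home/username/books/RomeoAndJuliet.pdf
```

Note that the parser treats both uniformly: a URN is just a URI whose scheme carries no retrieval mechanism.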

History of the URI (Uniform Resource Identifier)

  • Refinement of specifications

In December 1994, RFC 1738 formally defined relative and absolute URLs, refined the general URL syntax, defined how to resolve relative URLs to absolute form, and better enumerated the URL schemes then in use. The agreed definition and syntax of URNs had to wait until the publication of RFC 2141 in May 1997.

The publication of RFC 2396 in August 1998 saw the URI syntax become a separate specification, and most of the parts of RFCs 1630 and 1738 relating to URIs and URLs in general were revised and expanded by the IETF. The new RFC changed the significance of the "U" in "URI": it came to represent "Uniform" rather than "Universal". The sections of RFC 1738 that summarized existing URL schemes migrated into a separate document. IANA keeps a registry of those schemes; RFC 2717 first described the procedure to register them.

In December 1999, RFC 2732 provided a minor update to RFC 2396, allowing URIs to accommodate IPv6 addresses. Some time later, a number of shortcomings discovered in the two specifications led to the development of a number of draft revisions under the title rfc2396bis. This community effort, coordinated by RFC 2396 co-author Roy Fielding, culminated in the publication of RFC 3986 in January 2005. This RFC, as of 2009 the current version of the URI syntax recommended for use on the Internet, renders RFC 2396 obsolete. It does not, however, render the details of existing URL schemes obsolete; RFC 1738 continues to govern such schemes except where otherwise superseded – RFC 2616, for example, refines the 'http' scheme. Simultaneously, the IETF published the content of RFC 3986 as the full standard STD 66, reflecting the establishment of the URI generic syntax as an official Internet protocol.

In August 2002, RFC 3305 pointed out that the term 'URL' has, despite its widespread use in the vernacular of the Internet-aware public at large, faded into near obsolescence. It now serves only as a reminder that some URIs act as addresses because they have schemes that imply some kind of network accessibility, regardless of whether systems actually use them for that purpose. As URI-based standards such as Resource Description Framework make evident, resource identification need not suggest the retrieval of resource representations over the Internet, nor need they imply network-based resources at all.

On November 1, 2006, the W3C Technical Architecture Group published 'On Linking Alternative Representations To Enable Discovery And Publishing', a guide to best practices and canonical URIs for publishing multiple versions of a given resource. For example, content might differ by language or by size to adjust for capacity or settings of the device used to access that content.

The Semantic Web uses the HTTP URI scheme to identify both documents and concepts in the real world: this has caused confusion as to how to distinguish the two. The Technical Architecture Group of W3C (TAG) published an e-mail in June 2005 on how to solve this problem. The e-mail became known as the httpRange-14 resolution. To expand on this (rather brief) email, W3C published in March 2008 the Interest Group Note Cool URIs for the Semantic Web. This explains the use of content negotiation and the 303-redirect code in more detail.

  • Naming, addressing, and identifying resources

URIs and URLs have a shared history. In 1994, Tim Berners-Lee’s proposals for HyperText implicitly introduced the idea of a URL as a short string representing a resource that is the target of a hyperlink. At the time, people referred to it as a 'hypertext name' or 'document name'.

Over the next three and a half years, as the World Wide Web's core technologies of HTML (the HyperText Markup Language), HTTP, and web browsers developed, the need emerged to distinguish a string that provided an address for a resource from a string that merely named it. Although not yet formally defined, the term Uniform Resource Locator came to represent the former, and the more contentious Uniform Resource Name came to represent the latter.

During the debate over defining URLs and URNs it became evident that the two concepts embodied by the terms were merely aspects of the fundamental, overarching notion of resource identification. In June 1994, the IETF published Berners-Lee's RFC 1630: the first RFC that (in its non-normative text) acknowledged the existence of URLs and URNs, and, more importantly, defined a formal syntax for Universal Resource Identifiers — URL-like strings whose precise syntaxes and semantics depended on their schemes. In addition, this RFC attempted to summarize the syntaxes of URL schemes in use at the time. It also acknowledged, but did not standardize, the existence of relative URLs and fragment identifiers.

Example of an Absolute URI

http://example.org/absolute/URI/with/absolute/path/to/resource.txt
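The components of an absolute URI like this one, and the RFC 3986 rules for resolving relative references against it, can be demonstrated with Python's `urllib.parse` (a brief illustration only):

```python
from urllib.parse import urljoin, urlsplit

base = "http://example.org/absolute/URI/with/absolute/path/to/resource.txt"

# Split an absolute URI into its generic-syntax components.
parts = urlsplit(base)
print(parts.scheme)  # http
print(parts.netloc)  # example.org
print(parts.path)    # /absolute/URI/with/absolute/path/to/resource.txt

# Resolve a relative reference against the absolute base, per RFC 3986.
print(urljoin(base, "../other.txt"))
# http://example.org/absolute/URI/with/absolute/path/other.txt
```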

Wednesday, 08 January 2014

Wi-Fi (Wireless Fidelity)

Wi-Fi (Wireless Fidelity) is a well-known technology that allows electronic devices to exchange data wirelessly (using radio waves) over a computer network, including high-speed Internet connections. The Wi-Fi Alliance defines Wi-Fi as "wireless local area network (WLAN) products based on the Institute of Electrical and Electronics Engineers' (IEEE) 802.11 standards". Even so, since most WLANs today are based on these standards, the term "Wi-Fi" is used in common English as a synonym for "WLAN".

A device that can use Wi-Fi (such as a personal computer, video game console, smartphone, tablet, or digital audio player) can connect to a network resource such as the Internet via a wireless network access point. Such an access point (or hotspot) has a range of about 20 meters (65 feet) indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as several square miles; the latter is achieved by using multiple overlapping access points.

"Wi-Fi" is a trademark of the Wi-Fi Alliance and the brand name for products using the IEEE 802.11 family of standards. Only products that complete the Wi-Fi Alliance's interoperability certification testing may use the name and trademark "Wi-Fi CERTIFIED".

Wi-Fi has had a history of security problems. Its first encryption system, WEP, proved easy to break. Higher-quality protocols, WPA and WPA2, were added later. However, an optional feature added in 2007, called Wi-Fi Protected Setup (WPS), has a flaw that allows a remote attacker to recover a router's WPA or WPA2 password within a few hours. A number of companies have advised users to turn off the WPS feature. The Wi-Fi Alliance has since updated its test plans and certification program to ensure that all newly certified equipment resists brute-force attacks on the AP PIN.

Wi-fi History

The history of 802.11 technology began with the U.S. Federal Communications Commission's decision in 1985 to release the ISM bands for unlicensed use. In 1991, NCR Corporation and AT&T invented the precursor to 802.11, intended for use in cashier systems. The first wireless products were sold under the name WaveLAN.

Vic Hayes has been dubbed the "Father of Wi-Fi" for his involvement in designing the first IEEE 802.11 standard.

A large number of patents held by many companies apply to the 802.11 standard. In 1992 and 1996, the Australian organization CSIRO obtained patents for a method later used in Wi-Fi to reduce signal interference. In April 2009, 14 technology companies agreed to pay CSIRO $250 million for infringing these patents. This has led to Wi-Fi being touted as an Australian invention, though that has been a topic of some controversy. In 2012, CSIRO won a further $220 million lawsuit for patent infringement, requiring global firms in the United States to pay CSIRO licensing fees estimated at a total of $1 billion.

In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark, under which many products are sold.

Name

The term Wi-Fi, first used commercially in August 1999, was coined by a brand-consulting firm called Interbrand Corporation. The Wi-Fi Alliance had hired Interbrand to come up with a name that was easier to say than "IEEE 802.11b Direct Sequence". Phil Belanger, a founding member of the Wi-Fi Alliance, has also stated that Interbrand created "Wi-Fi" as a play on "hi-fi" (high fidelity), and that Interbrand designed the Wi-Fi logo.

The Wi-Fi Alliance initially used the advertising slogan "The Standard for Wireless Fidelity" for Wi-Fi, but later removed it from its marketing. Even so, a number of Alliance documents from 2003 and 2004 still use the term Wireless Fidelity. There has been no official statement regarding the removal of the term.

The yin-yang style Wi-Fi logo indicates that a product is certified for interoperability.

Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are usually called fixed wireless. Alternative wireless technologies include mobile phone standards such as 2G, 3G, and 4G.

Wi-Fi Certification

The IEEE does not test equipment for compliance with its standards. The nonprofit Wi-Fi Alliance was founded in 1999 to fill this gap: to establish and encourage standards of interoperability and backward compatibility, and to promote wireless local area network technology. As of 2010, the Wi-Fi Alliance consisted of more than 375 companies around the world. The Alliance encourages the use of the Wi-Fi brand for technologies based on the IEEE 802.11 standards from the Institute of Electrical and Electronics Engineers. This includes wireless local area network (WLAN) connections, device-to-device connectivity (such as Wi-Fi Peer-to-Peer, also known as Wi-Fi Direct), personal area networks (PAN), local area networks (LAN), and even a limited number of wide area network (WAN) connections. Manufacturers with Wi-Fi Alliance membership whose products successfully pass the certification process gain the right to mark those products with the Wi-Fi logo.

Specifically, the certification process requires compliance with the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification optionally includes testing of IEEE 802.11 draft standards, interaction with cellular phone technology in converged devices, and features relating to security setup, multimedia, and power saving.

Not all Wi-Fi equipment is submitted for certification. The lack of Wi-Fi certification does not necessarily mean that a device is incompatible with other Wi-Fi devices. If a device is compliant or partly compatible, the Wi-Fi Alliance may not object to its being described as a Wi-Fi device, though technically only certified devices are approved. Terms such as Super Wi-Fi, coined by the U.S. Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the United States, may or may not be sanctioned.

File Transfer Protocol (FTP)



File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet.

FTP is built on a client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that hides (encrypts) the username and password, and encrypts the content, FTP is often secured with SSL/TLS ("FTPS"). SSH File Transfer Protocol ("SFTP") is sometimes also used instead, but is technologically different.

The first FTP client applications were command-line applications developed before operating systems had graphical user interfaces, and are still shipped with most Windows, Unix, and Linux operating systems. Dozens of FTP clients and automation utilities have since been developed for desktops, servers, mobile devices, and hardware, and FTP has been incorporated into hundreds of productivity applications, such as Web page editors.

The History of FTP (File Transfer Protocol)

The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971. Until 1980, FTP ran on NCP, the predecessor of TCP/IP. The protocol was later replaced by a TCP/IP version, RFC 765 (June 1980) and RFC 959 (October 1985), the current specification. Several proposed standards amend RFC 959, for example RFC 2228 (June 1997) proposes security extensions and RFC 2428 (September 1998) adds support for IPv6 and defines a new type of passive mode.

Communication and data transfer

FTP may run in active or passive mode, which determines how the data connection is established. In active mode, the client creates a TCP control connection, sends the server its IP address and an arbitrary client port number, and then waits for the server to open the data connection back to that address and port. In situations where the client is behind a firewall and unable to accept incoming TCP connections, passive mode may be used. In this mode, the client uses the control connection to send a PASV command to the server and then receives a server IP address and server port number in reply, which the client then uses to open a data connection from an arbitrary client port. Both modes were updated in September 1998 to support IPv6. Further changes were introduced to the passive mode at that time, updating it to extended passive mode.
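The server's reply to PASV packs the data-connection address into six decimal numbers, (h1,h2,h3,h4,p1,p2), where the port is p1*256 + p2. A small sketch of the client-side parsing (the reply string below is a made-up example, not from a real server):

```python
import re

def parse_pasv_reply(reply: str):
    """Extract (ip, port) from a 227 reply such as
    '227 Entering Passive Mode (192,168,1,2,19,136)'."""
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if match is None:
        raise ValueError("not a valid PASV reply: " + reply)
    h1, h2, h3, h4, p1, p2 = (int(g) for g in match.groups())
    ip = f"{h1}.{h2}.{h3}.{h4}"
    port = p1 * 256 + p2  # high byte, low byte
    return ip, port

print(parse_pasv_reply("227 Entering Passive Mode (192,168,1,2,19,136)"))
# ('192.168.1.2', 5000)
```

The client would then open its data connection to that IP address and port.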

The server responds over the control connection with three-digit status codes in ASCII with an optional text message. For example "200" (or "200 OK") means that the last command was successful. The numbers represent the code for the response and the optional text represents a human-readable explanation or request (e.g. <Need account for storing file>). An ongoing transfer of file data over the data connection can be aborted using an interrupt message sent over the control connection.
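The three-digit replies follow a simple structure: the first digit gives the broad outcome (2xx means success, 4xx a transient error, 5xx a permanent error), and any text after the code is the human-readable part. A minimal splitter, assuming well-formed reply lines:

```python
def split_reply(line: str):
    """Split an FTP control reply like '200 Command okay.' into
    (code, text); the text part is optional."""
    code = int(line[:3])
    text = line[4:] if len(line) > 4 else ""
    return code, text

code, text = split_reply("200 Command okay.")
print(code, text)        # 200 Command okay.
print(code // 100 == 2)  # True: a first digit of 2 signals success
```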

While transferring data over the network, four data representations can be used:
  • ASCII mode: used for text. Data is converted, if needed, from the sending host's character representation to "8-bit ASCII" before transmission, and (again, if necessary) to the receiving host's character representation. As a consequence, this mode is inappropriate for files that contain data other than plain text.
  • Image mode (commonly called Binary mode): the sending machine sends each file byte for byte, and the recipient stores the bytestream as it receives it. (Image mode support has been recommended for all implementations of FTP).
  • EBCDIC mode: used for plain text between hosts using the EBCDIC character set. This mode is otherwise like ASCII mode.
  • Local mode: allows two computers with identical setups to send data in a proprietary format without the need to convert it to ASCII.
For text files, different format control and record structure options are provided. These features were designed to facilitate files containing Telnet or ASA.
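The practical difference between ASCII and image (binary) mode is the on-the-wire transformation: ASCII mode normalizes line endings in transit, while image mode sends bytes untouched. A toy illustration of why non-text files must not go through ASCII mode (this mimics the idea of the conversion, not a real FTP implementation):

```python
def to_wire_ascii(data: bytes) -> bytes:
    """Mimic ASCII-mode transmission: bare LF becomes CRLF on the wire."""
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

text = b"line one\nline two\n"
print(to_wire_ascii(text))  # b'line one\r\nline two\r\n'

# A binary file that happens to contain line-ending bytes is corrupted:
binary = b"\x89PNG\x0d\x0a\x1a\x0a"
print(to_wire_ascii(binary) == binary)  # False: the payload was altered
```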

Data transfer can be done in any of three modes:
  • Stream mode: Data is sent as a continuous stream, relieving FTP from doing any processing. Rather, all processing is left up to TCP. No end-of-file indicator is needed, unless the data is divided into records.
  • Block mode: FTP breaks the data into several blocks (block header, byte count, and data field) and then passes it on to TCP.
  • Compressed mode: Data is compressed using a single algorithm (usually run-length encoding).
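Run-length encoding, the algorithm named for compressed mode, replaces a run of repeated bytes with a count and one copy of the byte. A bare-bones sketch of the idea (not the actual wire format RFC 959 specifies):

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse runs of identical bytes into (count, byte) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)  # extend the current run
        else:
            runs.append((1, b))              # start a new run
    return runs

def rle_decode(runs) -> bytes:
    """Expand (count, byte) pairs back into the original byte string."""
    return b"".join(bytes([b]) * n for n, b in runs)

sample = b"aaaabbbcca"
encoded = rle_encode(sample)
print(encoded)                        # [(4, 97), (3, 98), (2, 99), (1, 97)]
print(rle_decode(encoded) == sample)  # True
```

This pays off for data with long runs of identical bytes and can expand data without them, which is why compressed mode is rarely the default.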