
Read the paragraph below and answer the questions that follow. Write your answer in your notebook. You have 30 minutes.

Cloudflare, Inc. is an American web-infrastructure and website-security company, providing content-delivery-network services, DDoS mitigation, Internet security, and distributed domain-name-server services. Cloudflare's services sit between a website's visitor and the Cloudflare user's hosting provider, acting as a reverse proxy for websites. Cloudflare's headquarters are in San Francisco.

History

Cloudflare was created in 2009 by Matthew Prince, Lee Holloway, and Michelle Zatlyn. It received media attention in June 2011 for providing security services to the website of LulzSec, a black hat hacking group. Cloudflare acts as a reverse proxy for web traffic. Cloudflare supports web protocols including SPDY and HTTP/2, and also offers support for HTTP/2 Server Push. From 2009, the company was venture-capital funded. On August 15, 2019, Cloudflare submitted its S-1 filing for an IPO on the New York Stock Exchange under the stock ticker NET. It opened for public trading on September 13, 2019, priced at $15 per share.

In March 2013, the company defended The Spamhaus Project from a DDoS attack that exceeded 300 Gbit/s. Akamai's chief architect stated that at the time it was "the largest publicly announced DDoS attack in the history of the Internet". In February 2014, Cloudflare mitigated what was at the time the largest ever recorded DDoS attack, which peaked at 400 Gbit/s against an undisclosed customer. In November 2014, Cloudflare reported another massive DDoS attack, with independent media sites being targeted at 500 Gbit/s. Cloudflare has also reportedly absorbed attacks that have peaked over 400 Gbit/s from an NTP reflection attack.

In 2014, Cloudflare introduced an effort called Project Galileo in response to cyberattacks against vulnerable online targets, such as artists, activists, journalists, and human rights groups. Project Galileo provides such groups with free services to protect their websites. In 2019, Cloudflare announced that 600 users and organizations were participating in the project.

On April 1, 2019, Cloudflare announced a new freemium Virtual Private Network service named WARP. The service would initially be available through the 1.1.1.1 mobile apps, with a desktop app available later.

1.1.1.1 is a free Domain Name System (DNS) service. The public DNS service and servers are maintained and owned by Cloudflare in partnership with APNIC. The service functions as a recursive name server providing domain name resolution for any host on the Internet. The service was announced on April 1, 2018, and is claimed by Cloudflare to be "the Internet's fastest, privacy-first consumer DNS service". On November 11, 2018, Cloudflare announced a mobile application of their 1.1.1.1 service for Android and iOS. On September 25, 2019, Cloudflare released WARP, an upgraded version of their original 1.1.1.1 mobile application. According to DNSPerf, 1.1.1.1 is the world's fastest recursive DNS resolver, beating other popular resolvers such as Google's Public DNS resolver. The 1.1.1.1 DNS service operates recursive name servers for public use at four IP addresses, which are mapped to the nearest operational server by anycast routing. The DNS service is also available for Tor clients. Users can set up the service by manually changing their DNS resolvers to those IP addresses.
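For readers who want to try this on a desktop, the sketch below shows one way to send queries straight to the resolver addresses named in this document (1.1.1.1 and 1.0.0.1) instead of the system-configured DNS servers. It is only an illustration: it assumes the third-party dnspython package is installed and that outbound DNS traffic on port 53 is not blocked by the local network.

```python
# Minimal sketch: query Cloudflare's public resolver directly instead of the
# system-configured DNS servers. Assumes `pip install dnspython`.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)   # ignore the OS resolver settings
resolver.nameservers = ["1.1.1.1", "1.0.0.1"]       # Cloudflare anycast addresses

answer = resolver.resolve("example.com", "A")        # ordinary A-record lookup
for record in answer:
    print(record.address)
```

Changing the operating system's resolver settings achieves the same effect for all applications; the snippet only demonstrates the lookup path.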
Mobile users on both Android and iOS have the alternative of downloading the 1.1.1.1 mobile application, which automatically configures the DNS resolvers on the device. 1.1.1.1 is successful as a recursive DNS resolver because of Cloudflare's vast network. Cloudflare also runs an authoritative DNS service for a network of over 20 million Internet properties. With the recursive resolver and the authoritative name servers on the same network, DNS queries can be answered faster than by existing resolvers. With the release of the 1.1.1.1 mobile application in November 2018, Cloudflare added the ability for users to encrypt their DNS queries over HTTPS (DoH) or TLS (DoT). Later on, WARP was implemented using a new protocol, WireGuard, which acts as a hyper-efficient VPN tunnel.

Exposure of abuse

Technology websites noted that by using 1.1.1.1 as the IP address for its service, Cloudflare exposed misconfigurations in existing setups that violated Internet standards (such as RFC 1918). 1.1.1.1 was not a reserved IP address, yet it was abused by many existing routers (mostly those sold by Cisco Systems) and companies for hosting login pages to private networks, exit pages, or other purposes, rendering the proper routing of 1.1.1.1 impossible on those systems. Additionally, 1.1.1.1 is blocked on many networks and by multiple ISPs because the simplicity of the address means that it was previously often used inappropriately for testing purposes rather than legitimate use. These previous uses have led to a huge influx of garbage data to Cloudflare's servers.

Cleanup of 1.1.1.1 and 1.0.0.1

The 1.0.0.0/8 IP block was assigned to APNIC in 2010; before this time it was unassigned space. An unassigned IP space, however, is not the same as an IP space reserved for private use (a reserved IP address). For example, AT&T has said it is working on fixing this issue within its CPE hardware.

1.1.1.1 was built as an alternative to the default DNS resolvers from Internet Service Providers (ISPs). Since every internet query needs to go through a DNS resolver in order to translate a text-based web address to a numerical IP address, DNS resolvers hold a lot of data on their users. Owners of DNS services, such as an ISP, are able to track exactly what websites a user visits. Cloudflare's 1.1.1.1 markets itself as the antithesis of these traditional DNS resolvers. The service claims to adhere to the following privacy principles:

- Don't write user-identifiable log data to disk;
- Never sell browsing data or use it in any way to target users with advertising data;
- Never require the end user to provide any personal information (name, phone number, or email address) in order to use the 1.1.1.1 App with WARP; and
- Regularly hire outside auditors to ensure the service is living up to these promises.

On April 1, 2019, Cloudflare announced they were planning to launch a VPN service called WARP which would be built into the 1.1.1.1 mobile app. The standard service would be provided for free, with a paid tier that includes additional features. The service was announced in April and millions of people signed up to get WARP. In the following months, Cloudflare did not release any updates or announcements on WARP, which led to frustration and negative feedback. After much anticipation, Cloudflare released WARP to the public on September 25, 2019. WARP is an update to Cloudflare's existing 1.1.1.1 mobile application. People can use the application in three different modes:

- 1.1.1.1 uses Cloudflare's public DNS resolver to encrypt DNS queries over DoH and DoT.
- 1.1.1.1 with WARP allows users to encrypt all their mobile traffic instead of just the DNS queries.
- WARP+ is the paid premium tier of the service. Using Cloudflare's Argo Smart Routing, WARP+ routes a user's traffic through the "Internet fast-lane", which Cloudflare claims makes websites load 30% faster on average.

The beta for macOS and Windows was announced on April 1, 2020.
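Because the DoH transport described above is just HTTPS, it can be exercised with nothing more than the standard library. The sketch below is an illustration only: it assumes Cloudflare's JSON DoH endpoint at https://cloudflare-dns.com/dns-query and its application/dns-json response format, both of which could change.

```python
# Minimal sketch: a DNS-over-HTTPS (DoH) lookup using only the standard library.
import json
import urllib.request

def doh_lookup(name: str, record_type: str = "A") -> list:
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={record_type}"
    request = urllib.request.Request(url, headers={"accept": "application/dns-json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        reply = json.load(response)
    # Each entry in "Answer" carries the record payload in its "data" field.
    return [answer["data"] for answer in reply.get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("example.com"))
```

Unlike the plain port-53 lookup shown earlier, the query here travels inside an ordinary TLS connection, which is the property the text attributes to DoH.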
Products

DDoS Protection

Cloudflare provides DDoS mitigation services which protect customers from distributed denial of service (DDoS) attacks. As of September 2020, the company claims to block "an average of 72 billion threats per day, including some of the largest DDoS attacks in history." On September 6, 2019, Wikipedia became the victim of a DDoS attack. European users were unable to access Wikipedia for several hours. The attack was mitigated after Wikimedia network engineers used Cloudflare's network and DDoS protection services to re-route and filter internet traffic. The specific Cloudflare product used was Magic Transit.

Content Distribution Network

Cloudflare offers a popular Content Distribution Network (CDN) service. The company launched in 2010, and TechCrunch wrote that their goal was to be "a CDN for the masses." Ten years later, the company claimed to support over 25 million internet websites.

Controversies

Cloudflare has faced several controversies over its unwillingness to monitor content distributed via its network, a stance it has defended based on the principle of free speech. Cloudflare stated that it will "continue to abide by the law" and "serve all customers", further explaining "our proper role is not that of Internet censor". These controversies have involved Cloudflare's policy of content neutrality and subsequent usage of its services by numerous contentious websites, including The Daily Stormer and 8chan, an imageboard which has been linked to multiple mass shootings in the United States and the Christchurch mosque shootings in New Zealand. Under public pressure, Cloudflare terminated services to The Daily Stormer in 2017 and to 8chan following the 2019 El Paso shooting.

Cloudflare has come under pressure on multiple occasions due to its policies and for refusing to cease technical support (such as DNS routing and DDoS mitigation) of websites such as LulzSec, The Daily Stormer, and 8chan. Some have argued that Cloudflare's services allow access to content which spreads hate and has led to harm and deaths. However, Cloudflare, as an Internet infrastructure provider, has broad legal immunity from liability for the content produced by its users.

Cloudflare provided DNS routing and DoS protection for the white supremacist and neo-Nazi website The Daily Stormer. In 2017, Cloudflare stopped providing its services to The Daily Stormer after an announcement on the controversial website asserted that the "upper-echelons" of Cloudflare were "secretly supporters of their ideology". Previously, Cloudflare had refused to take any action regarding The Daily Stormer. As a self-described "free speech absolutist", Cloudflare's CEO Matthew Prince, in a blog post, vowed never to succumb to external pressure again and sought to create a "political umbrella" for the future.
Prince further addressed the dangers of large companies deciding what is allowed to stay online, a concern that is shared by a number of civil liberties groups and privacy experts. The Electronic Frontier Foundation, a US digital rights group, said that services such as Cloudflare "should not be adjudicating what speech is acceptable", adding that "when illegal activity, like inciting violence or defamation, occurs, the proper channel to deal with it is the legal system."

The Huffington Post alleges that Cloudflare provides services to "at least 7 terrorist groups" as designated by the United States Department of State, including the Taliban, Hamas, and the al-Quds Brigades, that it has been aware of this since at least 2012, and that it has taken no action. However, according to Cloudflare's CEO, no law enforcement agency has asked the company to discontinue these services.

In 2019, Cloudflare was criticized for providing services to the discussion and imageboard 8chan, which allows users to post and discuss any content with minimal interference from site administrators. The message board has been linked to mass shootings in the United States and the Christchurch mosque shootings in New Zealand. In addition, a number of news organizations, including The Washington Post and The Daily Dot, have reported the existence of child pornography and child sexual abuse discussion boards on the site. A Cloudflare representative has been quoted by the BBC saying that the platform "does not host the referenced websites, cannot block websites, and is not in the business of hiding companies that host illegal content". In an August 3 interview with The Guardian, immediately following the 2019 El Paso shooting, CEO Matthew Prince defended Cloudflare's support of 8chan, stating that he had a "moral obligation" to keep the site online. In August 2019, Cloudflare terminated services to 8chan, an American imageboard, after the perpetrator of the 2019 El Paso shooting allegedly used the website to upload his manifesto.

Cloudflare services have been used by Rescator, a carding website that sells stolen payment card data. Two of the top three online chat forums belonging to the Islamic State of Iraq and the Levant (ISIL) are protected by Cloudflare. According to Prince, U.S. law enforcement has not asked Cloudflare to discontinue the service, and the company has not chosen to do so itself. In November 2015, hacktivist group Anonymous discouraged the use of Cloudflare's services following the ISIL attacks in Paris and the renewed accusation that Cloudflare aids terrorists. Cloudflare responded by calling the group "15-year-old kids in Guy Fawkes masks", and saying that whenever such concerns are raised it consults anti-terrorism experts and abides by the law.

In late 2019, Cloudflare was criticized for providing services to the anti-black website Chimpmania. Hundreds of thousands signed a petition on Change.org urging Prince to terminate services to Chimpmania. The petition was created by the parents of a biracial baby who was born with gastroschisis, who was mocked as a "mulatto monkey baby" by site users, and whose pictures were posted on the site. Over the ten years the site has been active, numerous other petitions have also been leveled against it, none of which were successful.

Security and privacy

The hacker group UGNazi attacked Cloudflare partially by exploiting flaws in Google's authentication systems in June 2012, gaining administrative access to Cloudflare and using it to deface 4chan.
From September 2016 until February 2017, a major Cloudflare bug (nicknamed Cloudbleed) leaked sensitive data, including passwords and authentication tokens, from customer websites by sending extra data in response to web requests. The leaks resulted from a buffer overflow which occurred, according to analysis by Cloudflare, on approximately 1 in every 3,300,000 HTTP requests.

In May 2017, ProPublica reported that Cloudflare, as a matter of policy, relays the names and email addresses of persons complaining about hate sites to the sites in question, which has led to the complainants being harassed. Cloudflare's general counsel defended the company's policies by saying it is "base constitutional law that people can face their accusers". In response to the report, Cloudflare updated its abuse reporting process to provide greater control over who is notified of the complaining party.

Cloudflare is cited in reports by The Spamhaus Project, an international spam tracking organization, due to high numbers of cybercriminal botnet operations "hosted" on Cloudflare services. An October 2015 report found that Cloudflare provisioned 40% of SSL certificates used by phishing sites with deceptive domain names resembling those of banks and payment processors.

Cloudflare suffered a major outage on July 2, 2019, which rendered more than 12 million websites (80% of all customers) unreachable for 27 minutes. A similar outage occurred on July 17, 2020, causing a similar effect and impacting a similar number of sites.

Website defacement is an attack on a website that changes the visual appearance of a website or a web page. These are typically the work of defacers, who break into a web server and replace the hosted website with one of their own. Defacement is generally meant as a kind of electronic graffiti and, like other forms of vandalism, is also used to spread messages by politically motivated "cyber protesters" or hacktivists. Methods such as a web shell may be used to aid in website defacement. Religious and government sites are regularly targeted by hackers in order to display political or religious beliefs, whilst disparaging the views and beliefs of others. Disturbing images and offensive phrases might be displayed in the process, as well as a signature of sorts, to show who was responsible for the defacement. Websites are not only defaced for political reasons; many defacers do it just for the thrill. For example, there are online contests in which hackers are awarded points for defacing the largest number of web sites in a specified amount of time. Corporations are also targeted more often than other websites on the World Wide Web, and they often seek to take measures to protect themselves from defacement or hacking in general. Websites represent the image of a company or organisation and can therefore suffer significant losses due to defacement. Visitors may lose faith in sites that cannot promise security and will become wary of performing online transactions. After defacement, sites have to be shut down for repairs and security review, sometimes for an extended period of time, causing expenses and loss of profit and value.

Internet censorship is the control or suppression of what can be accessed, published, or viewed on the Internet, enacted by regulators or carried out on their own initiative. Individuals and organizations may engage in self-censorship for moral, religious, or business reasons, to conform to societal norms, due to intimidation, or out of fear of legal or other consequences.
The extent of Internet censorship varies on a country-to-country basis. While some democratic countries have moderate Internet censorship, other countries go as far as to limit access to information such as news and suppress discussion among citizens. Internet censorship also occurs in response to or in anticipation of events such as elections, protests, and riots. An example is the increased censorship due to the events of the Arab Spring. Other types of censorship include the use of copyrights, defamation, harassment, and obscene material claims as a way to suppress content.

Support for and opposition to Internet censorship also varies. In a 2012 Internet Society survey, 71% of respondents agreed that "censorship should exist in some form on the Internet". In the same survey, 83% agreed that "access to the Internet should be considered a basic human right" and 86% agreed that "freedom of expression should be guaranteed on the Internet". Perception of internet censorship in the US is largely based on the First Amendment and the right to expansive free speech and access to content without regard to the consequences. According to GlobalWebIndex, over 400 million people use virtual private networks to circumvent censorship or for increased user privacy.

Many of the challenges associated with Internet censorship are similar to those for offline censorship of more traditional media such as newspapers, magazines, books, music, radio, television, and film. One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. Thus censors must work to prevent access to information even though they lack physical or legal control over the websites themselves. This in turn requires the use of technical censorship methods that are unique to the Internet, such as site blocking and content filtering.

Views about the feasibility and effectiveness of Internet censorship have evolved in parallel with the development of the Internet and censorship technologies. A 1993 Time Magazine article quotes computer scientist John Gilmore, one of the founders of the Electronic Frontier Foundation, as saying "The Net interprets censorship as damage and routes around it." In November 2007, "Father of the Internet" Vint Cerf stated that he sees government control of the Internet failing because the Web is almost entirely privately owned. A report of research conducted in 2007 and published in 2009 by the Berkman Center for Internet & Society at Harvard University stated: "We are confident that the [censorship circumvention] tool developers will for the most part keep ahead of the governments' blocking efforts", but also that "...we believe that less than two percent of all filtered Internet users use circumvention tools." In contrast, a 2011 report by researchers at the Oxford Internet Institute published by UNESCO concludes "... the control of information on the Internet and Web is certainly feasible, and technological advances do not therefore guarantee greater freedom of speech."

Blocking and filtering can be based on relatively static blacklists or be determined more dynamically based on a real-time examination of the information being exchanged. Blacklists may be produced manually or automatically and are often not available to non-customers of the blocking software.
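To make the preceding description concrete, the sketch below shows the core of a static blacklist filter of the kind described: a fixed set of domains and keywords checked against each request. It is a toy illustration only; the domain names and keywords are invented placeholders, and real deployments combine such lists with the dynamic, real-time inspection mentioned above.

```python
# Illustrative sketch of static blacklist filtering: match requests against
# fixed domain suffixes and URL keywords. The entries below are placeholders,
# not a real blocklist.
BLOCKED_DOMAINS = {"blocked.example", "news.example"}
BLOCKED_KEYWORDS = {"protest", "petition"}

def is_blocked(host: str, url_path: str) -> bool:
    # Domain rule: block the listed domain itself and any subdomain of it.
    if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return True
    # Keyword rule: block URLs whose path contains a listed keyword.
    return any(keyword in url_path.lower() for keyword in BLOCKED_KEYWORDS)

print(is_blocked("www.blocked.example", "/index.html"))   # True (domain rule)
print(is_blocked("example.com", "/2011/protest-photos"))  # True (keyword rule)
print(is_blocked("example.com", "/weather"))              # False
```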
Blocking or filtering can be done at a centralized national level, at a decentralized sub-national level, or at an institutional level, for example in libraries, universities or Internet cafes. Blocking and filtering may also vary within a country across different ISPs. Countries may filter sensitive content on an ongoing basis and/or introduce temporary filtering during key time periods such as elections. In some cases the censoring authorities may surreptitiously block content to mislead the public into believing that censorship has not been applied. This is achieved by returning a fake "Not Found" error message when an attempt is made to access a blocked website.

Unless the censor has total control over all Internet-connected computers, such as in North Korea (which employs an intranet that only privileged citizens can access) or Cuba, total censorship of information is very difficult or impossible to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Freenet) protect free speech using technologies that guarantee material cannot be removed and prevent the identification of authors. Technologically savvy users can often find ways to access blocked content. Nevertheless, blocking remains an effective means of limiting access to sensitive information for most users when censors, such as those in China, are able to devote significant resources to building and maintaining a comprehensive censorship system. The term "splinternet" is sometimes used to describe the effects of national firewalls. The verb "rivercrab" colloquially refers to censorship of the Internet, particularly in Asia.

Technical censorship

Various parties use different technical methods of preventing public access to undesirable resources, with varying levels of effectiveness, costs and side effects.

Blacklists

Entities mandating and implementing censorship usually identify the content to be blocked by one of the following items: keywords, domain names, and IP addresses. Lists are populated from different sources, ranging from private suppliers through courts to specialized government agencies (the Ministry of Industry and Information Technology in China, the Ministry of Culture and Islamic Guidance in Iran). As per Hoffmann, different methods are used to block certain websites or pages, including DNS poisoning, blocking access to IPs, analyzing and filtering URLs, inspecting and filtering packets, and resetting connections.

Points of control

Enforcement of censor-nominated technologies can be applied at various levels of a country's Internet infrastructure:

- Internet backbone, including Internet exchange points (IXPs) with international networks (autonomous systems), operators of submarine communications cables, satellite Internet access points, international optical fibre links, etc. In addition to facing huge performance challenges due to the large bandwidths involved, these points do not give censors access to information exchanged within the country.
- Internet Service Providers, which involves installation of voluntary (as in the UK) or mandatory (as in Russia) Internet surveillance and blocking equipment.
- Individual institutions, which in most cases implement some form of Internet access controls to enforce their own policies but, especially in the case of public or educational institutions, may be requested or coerced to do this at the request of the government.
- Personal devices, whose manufacturers or vendors may be required by law to install censorship software.
- Application service providers (e.g. social media companies), who may be legally required to remove particular content. Foreign providers with a business presence in a given country may also be coerced into restricting access to specific content for visitors from the requesting country.
- Certificate authorities, which may be required to issue counterfeit X.509 certificates controlled by the government, allowing man-in-the-middle surveillance of TLS-encrypted connections.
- Content Delivery Network providers, who tend to aggregate large amounts of content (e.g. images) and may also be attractive targets for censorship authorities.
Approaches

Internet content is subject to technical censorship methods, including:

- Internet Protocol (IP) address blocking: Access to a certain IP address is denied. If the target website is hosted on a shared hosting server, all websites on the same server will be blocked. This affects IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find proxies that have access to the target websites, but proxies may be jammed or blocked, and some websites, such as Wikipedia (when editing), also block proxies. Some large websites such as Google have allocated additional IP addresses to circumvent the block, but later the block was extended to cover the new addresses. Due to challenges with geolocation, geo-blocking is normally implemented via IP address blocking.
- Domain name system (DNS) filtering and redirection: Blocked domain names are not resolved, or an incorrect IP address is returned via DNS hijacking or other means. This affects all IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find an alternative DNS resolver that resolves domain names correctly, but domain name servers are subject to blockage as well, especially via IP address blocking. Another workaround is to bypass DNS if the IP address is obtainable from other sources and is not itself blocked, for example by modifying the Hosts file or typing the IP address instead of the domain name as part of a URL given to a Web browser.
- Uniform Resource Locator (URL) filtering: URL strings are scanned for target keywords regardless of the domain name specified in the URL. This affects the HTTP protocol. Typical circumvention methods are to use escaped characters in the URL, or to use encrypted protocols such as VPN and TLS/SSL.
- Packet filtering: TCP packet transmissions are terminated when a certain number of controversial keywords are detected. This affects all TCP-based protocols such as HTTP, FTP and POP, but search engine results pages are more likely to be censored. Typical circumvention methods are to use encrypted connections such as VPN and TLS/SSL so that the HTML content is not visible to the filter, or to reduce the TCP/IP stack's MTU/MSS to reduce the amount of text contained in a given packet.
- Connection reset: If a previous TCP connection is blocked by the filter, future connection attempts from both sides can also be blocked for some variable amount of time. Depending on the location of the block, other users or websites may also be blocked if the communication is routed through the blocking location. A circumvention method is to ignore the reset packet sent by the firewall (a sketch after this list shows how resets and silent drops appear to a client).
- Network disconnection: A technically simpler method of Internet censorship is to completely cut off all routers, either by software or by hardware (turning off machines, pulling out cables). A circumvention method could be to use a satellite ISP to access the Internet.
- Portal censorship and search result removal: Major portals, including search engines, may exclude websites that they would ordinarily include. This renders a site invisible to people who do not know where to find it. When a major portal does this, it has a similar effect to censorship. Sometimes this exclusion is done to satisfy a legal or other requirement, other times it is purely at the discretion of the portal. For example, Google.de and Google.fr remove Neo-Nazi and other listings in compliance with German and French law.
- Computer network attacks: Denial-of-service attacks and attacks that deface opposition websites can produce the same result as other blocking techniques, preventing or limiting access to certain websites or other online services, although only for a limited period of time. This technique might be used during the lead-up to an election or some other sensitive period. It is more frequently used by non-state actors seeking to disrupt services.
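Since several of the blocking behaviours above (dropped packets, injected resets, unreachable addresses) surface as ordinary socket errors, a client can at least record which symptom it sees. The sketch below is a rough illustration of that idea using only the standard library; the hostname is a placeholder, and the error-to-cause mapping in the comments is heuristic rather than proof of censorship.

```python
# Minimal sketch: report how a TCP connection attempt fails, which is the
# client-side symptom of IP blocking, connection resets, or silent drops.
import socket

def probe(host: str, port: int = 443, timeout: float = 5.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "reachable"
    except socket.timeout:
        return "timed out (packets possibly dropped silently)"
    except ConnectionResetError:
        return "connection reset (possible injected RST)"
    except ConnectionRefusedError:
        return "refused (nothing listening, or reset during the handshake)"
    except OSError as exc:
        return f"other network error: {exc}"

if __name__ == "__main__":
    print(probe("example.com"))   # placeholder host
```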
Over- and under-blocking

Technical censorship techniques are subject to both over- and under-blocking, since it is often impossible to block exactly the targeted content without either blocking other permissible material or allowing some access to the targeted material, and so providing more or less protection than desired. An example is blocking the IP address of a server that hosts multiple websites, which prevents access to all of the websites rather than just those that contain content deemed offensive.

Use of commercial filtering software

[Screenshot: Websense blocking Facebook in an organization where it has been configured to block a category named "Personals and Dating"]

Writing in 2009, Ronald Deibert, professor of political science at the University of Toronto and co-founder and one of the principal investigators of the OpenNet Initiative, and, writing in 2011, Evgeny Morozov, a visiting scholar at Stanford University and an op-ed contributor to The New York Times, explain that companies in the United States, Finland, France, Germany, Britain, Canada, and South Africa are in part responsible for the increasing sophistication of online content filtering worldwide. While the off-the-shelf filtering software sold by Internet security companies is primarily marketed to businesses and individuals seeking to protect themselves and their employees and families, it is also used by governments to block what they consider sensitive content.

Among the most popular filtering software programs is SmartFilter by Secure Computing in California, which was bought by McAfee in 2008. SmartFilter has been used by Tunisia, Saudi Arabia, Sudan, the UAE, Kuwait, Bahrain, Iran, and Oman, as well as the United States and the UK. Myanmar and Yemen have used filtering software from Websense. The Canadian-made commercial filter Netsweeper is used in Qatar, the UAE, and Yemen. The Canadian organization Citizen Lab has reported that Sandvine and Procera products are used in Turkey and Egypt.

On 12 March 2013, in a special report on Internet surveillance, Reporters Without Borders named five "Corporate Enemies of the Internet": Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany). The companies sell products that are liable to be used by governments to violate human rights and freedom of information.
RWB said that the list is not exhaustive and will be expanded in the coming months.

In a U.S. lawsuit filed in May 2011, Cisco Systems is accused of helping the Chinese government build a firewall, known widely as the Golden Shield, to censor the Internet and keep tabs on dissidents. Cisco said it had made nothing special for China. Cisco is also accused of aiding the Chinese government in monitoring and apprehending members of the banned Falun Gong group.

Many filtering programs allow blocking to be configured based on dozens of categories and sub-categories such as these from Websense: "abortion" (pro-life, pro-choice), "adult material" (adult content, lingerie and swimsuit, nudity, sex, sex education), "advocacy groups" (sites that promote change or reform in public policy, public opinion, social practice, economic activities, and relationships), "drugs" (abused drugs, marijuana, prescribed medications, supplements and unregulated compounds), "religion" (non-traditional religions, occult and folklore, traditional religions), and so on. The blocking categories used by the filtering programs may contain errors leading to the unintended blocking of websites. The blocking of Dailymotion in early 2007 by Tunisian authorities was, according to the OpenNet Initiative, due to Secure Computing wrongly categorizing Dailymotion as pornography for its SmartFilter filtering software. It was initially thought that Tunisia had blocked Dailymotion due to satirical videos about human rights violations in Tunisia, but after Secure Computing corrected the mistake, access to Dailymotion was gradually restored in Tunisia.

Organizations such as the Global Network Initiative, the Electronic Frontier Foundation, Amnesty International, and the American Civil Liberties Union have successfully lobbied some vendors such as Websense to make changes to their software, to refrain from doing business with repressive governments, and to educate schools that have inadvertently configured their filtering software too strictly. Nevertheless, regulations and accountability related to the use of commercial filters and services are often non-existent, and there is relatively little oversight from civil society or other independent groups. Vendors often consider information about which sites and content are blocked to be valuable intellectual property that is not made available outside the company, sometimes not even to the organizations purchasing the filters. Thus, by relying upon out-of-the-box filtering systems, the detailed task of deciding what is or is not acceptable speech may be outsourced to the commercial vendors.

Non-technical censorship

Internet content is also subject to censorship methods similar to those used with more traditional media. For example:

- Laws and regulations may prohibit various types of content and/or require that content be removed or blocked either proactively or in response to requests.
- Publishers, authors, and ISPs may receive formal and informal requests to remove, alter, slant, or block access to specific sites or content.
- Publishers and authors may accept bribes to include, withdraw, or slant the information they present.
- Publishers, authors, and ISPs may be subject to arrest, criminal prosecution, fines, and imprisonment.
- Publishers, authors, and ISPs may be subject to civil lawsuits.
- Equipment may be confiscated and/or destroyed.
- Publishers and ISPs may be closed, or required licenses may be withheld or revoked.
- Publishers, authors, and ISPs may be subject to boycotts.
- Publishers, authors, and their families may be subject to threats, attacks, beatings, and even murder.
- Publishers, authors, and their families may be threatened with or actually lose their jobs.
- Individuals may be paid to write articles and comments in support of particular positions or attacking opposition positions, usually without acknowledging the payments to readers and viewers.
- Censors may create their own online publications and websites to guide online opinion.
- Access to the Internet may be limited due to restrictive licensing policies or high costs.
- Access to the Internet may be limited due to a lack of the necessary infrastructure, deliberate or not.
- Access to search results may be restricted due to government involvement in the censorship of specific search terms, and content may be excluded due to terms set with search engines. To be allowed to operate in a new territory, search engines must agree to abide by censorship standards set by the government in that country.

Censorship of users by web service operators

Removal of user accounts based on controversial content

Deplatforming is a form of Internet censorship in which controversial speakers or speech are suspended, banned, or otherwise shut down by social media platforms and other service providers that generally provide a venue for free speech or expression. Banking and financial service providers, among other companies, have also denied services to controversial activists or organizations, a practice known as "financial deplatforming". Law professor Glenn Reynolds dubbed 2018 the "Year of Deplatforming" in an August 2018 article in The Wall Street Journal. According to Reynolds, in 2018 "the internet giants decided to slam the gates on a number of people and ideas they don't like. If you rely on someone else's platform to express unpopular ideas, especially ideas on the right, you're now at risk." On 6 August 2018, for example, several major platforms, including YouTube and Facebook, executed a coordinated, permanent ban on all accounts and media associated with conservative talk show host Alex Jones and his media platform InfoWars, citing "hate speech" and "glorifying violence." Reynolds also cited Gavin McInnes and Dennis Prager as prominent 2018 targets of deplatforming based on their political views, noting, "Extremists and controversialists on the left have been relatively safe from deplatforming."

Official statements regarding site and content removal

Most major web service operators reserve broad rights to remove or pre-screen content and to suspend or terminate user accounts, sometimes without giving a specific list, or giving only a vague general list, of the reasons allowing the removal. The phrases "at our sole discretion", "without prior notice", and "for other reasons" are common in Terms of Service agreements.

Facebook: Among other things, the Facebook Statement of Rights and Responsibilities says: "You will not post content that: is hateful, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence", "You will not use Facebook to do anything unlawful, misleading, malicious, or discriminatory", "We can remove any content or information you post on Facebook if we believe that it violates this Statement", and "If you are located in a country embargoed by the United States, or are on the U.S.
Treasury Department's list of Specially Designated Nationals you will not engage in commercial activities on Facebook (such as advertising or payments) or operate a Platform application or website". Google: Google's general Terms of Service, which were updated on 1 March 2012, state: "We may suspend or stop providing our Services to you if you do not comply with our terms or policies or if we are investigating suspected misconduct", "We may review content to determine whether it is illegal or violates our policies, and we may remove or refuse to display content that we reasonably believe violates our policies or the law", and "We respond to notices of alleged copyright infringement and terminate accounts of repeat infringers according to the process set out in the U.S. Digital Millennium Copyright Act". Google Search: Google's Webmaster Tools help includes the following statement: "Google may temporarily or permanently remove sites from its index and search results if it believes it is obligated to do so by law, if the sites do not meet Google's quality guidelines, or for other reasons, such as if the sites detract from users' ability to locate relevant information." Twitter: The Twitter Terms of Service state: "We reserve the right at all times (but will not have an obligation) to remove or refuse to distribute any Content on the Services and to terminate users or reclaim usernames" and "We reserve the right to remove Content alleged to be [copyright] infringing without prior notice and at our sole discretion". YouTube: The YouTube Terms of Service include the statements: "YouTube reserves the right to decide whether Content violates these Terms of Service for reasons other than copyright infringement, such as, but not limited to, pornography, obscenity, or excessive length. YouTube may at any time, without prior notice and in its sole discretion, remove such Content and/or terminate a user's account for submitting such material in violation of these Terms of Service", "YouTube will remove all Content if properly notified that such Content infringes on another's intellectual property rights", and "YouTube reserves the right to remove Content without prior notice". Wikipedia: Content within a Wikipedia article may be modified or deleted by any editor as part of the normal process of editing and updating articles. All editing decisions are open to discussion and review. The Wikipedia Deletion policy outlines the circumstances in which entire articles can be deleted. Any editor who believes a page doesn't belong in an encyclopedia can propose its deletion. Such a page can be deleted by any administrator if, after seven days, no one objects to the proposed deletion. Speedy deletion allows for the deletion of articles without discussion and is used to remove pages that are so obviously inappropriate for Wikipedia that they have no chance of surviving a deletion discussion. All deletion decisions may be reviewed, either informally or formally. Yahoo!: Yahoo!'s Terms of Service (TOS) state: "You acknowledge that Yahoo! may or may not pre-screen Content, but that Yahoo! and its designees shall have the right (but not the obligation) in their sole discretion to pre-screen, refuse, or remove any Content that is available via the Yahoo! Services. Without limiting the foregoing, Yahoo! and its designees shall have the right to remove any Content that violates the TOS or is otherwise objectionable." 
Internet censorship circumvention refers to the processes used by technologically savvy Internet users to bypass the technical aspects of Internet filtering and gain access to otherwise censored material. Circumvention is an inherent problem for those wishing to censor the Internet because filtering and blocking do not remove content from the Internet but instead block access to it. Therefore, as long as there is at least one publicly accessible uncensored system, it will often be possible to gain access to the otherwise censored material. However, circumvention may not be possible for non-tech-savvy users, so blocking and filtering remain effective means of censoring the Internet access of large numbers of users.

Different techniques and resources are used to bypass Internet censorship, including proxy websites, virtual private networks, sneakernets, the dark web and circumvention software tools. Solutions have differing ease of use, speed, security, and risks. Most, however, rely on gaining access to an Internet connection that is not subject to filtering, often in a different jurisdiction not subject to the same censorship laws. According to GlobalWebIndex, over 400 million people use virtual private networks to circumvent censorship or for an increased level of privacy. The majority of circumvention techniques are not suitable for day-to-day use.

There are risks to using circumvention software or other methods to bypass Internet censorship. In some countries, individuals who gain access to otherwise restricted content may be violating the law and, if caught, can be expelled, fired, jailed, or subject to other punishments and loss of access. In June 2011, The New York Times reported that the U.S. is engaged in a "global effort to deploy 'shadow' Internet and mobile phone systems that dissidents can use to undermine repressive governments that seek to silence them by censoring or shutting down telecommunications networks." Another way to circumvent Internet censorship is to physically go to an area where the Internet is not censored. In 2017, a so-called "Internet refugee camp" was established by IT workers in the village of Bonako, just outside an area of Cameroon where the Internet is regularly blocked.

An emerging technology, blockchain DNS, is also challenging the status quo of centralized infrastructure on the Internet, through a design principle of building a domain name system that is more decentralized and transparent. Blockchain, in layman's terms, is a public ledger that records all events, transactions or exchanges that happen between parties (identified as nodes) in a network. Bitcoin popularized the concept of blockchain, but blockchain is a baseline platform that has far greater implications than just Bitcoin or cryptocurrencies. Blockchain domain names are entirely an asset of the domain owner and can only be controlled by the owner through a private key. Therefore, authorities cannot take down content or enforce a shutdown of the domain. However, the technology has its own flaws: users need to install browser add-ons to be able to access blockchain domains.

Increased use of HTTPS

The use of HTTPS, versus what was originally HTTP, in web searches has created greater accessibility to most sites that were originally blocked or heavily monitored. Many social media sites, including Facebook, Google, and Twitter, have added automatic redirection to HTTPS as of 2017. With the added adoption of HTTPS, "censors" are left with the limited options of either completely blocking all content or none of it. Sites that were blocked in Egypt began using sites such as Medium to get their content out due to the difficulty "censors" would have with blocking each piece of individual content. With the use of Medium, many users were able to get more available access within more heavily monitored countries. However, the site was blocked in several areas, which caused millions of posts on the site to become inaccessible. An article written by Sarawak Report was one of the many articles blocked from the site. The article was blocked by the Malaysian Communications and Multimedia Commission (MCMC) due to what was deemed a failure to comply with the requested removal of the post.

The use of HTTPS does not inherently prevent the censorship of an entire domain, as the domain name is left unencrypted in the ClientHello of the TLS handshake. The Encrypted Client Hello TLS extension expands on HTTPS and encrypts the entire ClientHello, but this depends on both client and server support.
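The point about the ClientHello can be seen directly from a client: in the sketch below, the hostname passed as server_hostname is placed in the TLS Server Name Indication (SNI) field of the ClientHello, which the standard-library ssl module sends in the clear; only an ECH-capable TLS stack would encrypt it. This is a minimal illustration, with example.com as a placeholder host.

```python
# Minimal sketch: open a TLS connection and note that the hostname given as
# server_hostname travels unencrypted in the ClientHello's SNI field
# (the standard-library ssl module does not implement Encrypted Client Hello).
import socket
import ssl

host = "example.com"  # placeholder
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as raw_sock:
    # An on-path observer cannot read the encrypted traffic that follows,
    # but it can read `host` from the SNI extension of this handshake.
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("negotiated", tls_sock.version(), "with", host)
```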
Common targets

There are several motives or rationales for Internet filtering: politics and power, social norms and morals, and security concerns. Protecting existing economic interests is an additional emergent motive for Internet filtering. In addition, networking tools and applications that allow the sharing of information related to these motives are themselves subjected to filtering and blocking. And while there is considerable variation from country to country, the blocking of websites in a local language is roughly twice that of websites available only in English or other international languages.

Politics and power

Censorship directed at political opposition to the ruling government is common in authoritarian and repressive regimes. Some countries block websites related to religion and minority groups, often when these movements represent a threat to the ruling regimes. Examples include:

- Political blogs and websites
- Lèse majesté sites, sites with content that offends the dignity of or challenges the authority of a reigning sovereign or of a state
- Falun Gong and Tibetan exile group sites in China, or Buddhist, Cao Dai faith, and indigenous hill tribe sites in Vietnam
- The 50 Cent Party, or "50 Cent Army", which works to sway public opinion in favour of the Communist Party of China
- Russian web brigades
- Sites aimed at religious conversion from Islam to Christianity
- Sites criticizing the government or an authority in the country
- Sites that comment on political parties that oppose the current government of a country
- Sites that accuse authorities of corruption
- Sites that comment on minorities or LGBT issues

Social norms

Social filtering is censorship of topics that are held to be antithetical to accepted societal norms. In particular, censorship of child pornography, and censorship intended to protect children, enjoys very widespread public support, and such content is subject to censorship and other restrictions in most countries.
Examples include:

- Sites that include hate speech inciting racism, sexism, homophobia, or other forms of bigotry
- Sites seen as promoting illegal drug use (Erowid)
- Sex and erotica, fetishism, prostitution, and pornographic sites
- Child pornography and pedophile-related sites (see also CIRCAMP)
- Gambling sites
- Sites encouraging or inciting violence
- Sites promoting criminal activity
- Communist symbols and imagery in Poland, Lithuania, Ukraine, Latvia, Moldova, and Hungary
- Nazi and similar websites, particularly in France and Germany
- Sites that contain blasphemous content, particularly when directed at a majority or state-supported religion
- Sites that contain defamatory, slanderous, or libelous content
- Sites that include political satire
- Sites that contain information on social issues or "online protests, petitions and campaigns"

Security concerns

Many organizations implement filtering as part of a defense-in-depth strategy to protect their environments from malware, and to protect their reputations in the event of their networks being used, for example, to carry out sexual harassment. Internet filtering related to threats to national security that targets the websites of insurgents, extremists, and terrorists often enjoys wide public support. Examples include:

- Blocking of pro-North Korean sites by South Korea
- Blocking sites of groups that foment domestic conflict in India
- Blocking of sites of the Muslim Brotherhood in some countries in the Middle East
- Blocking WikiLeaks
- Blocking sites such as 4chan thought to be related to the group Anonymous

Protection of existing economic interests and copyright

The protection of existing economic interests is sometimes the motivation for blocking new Internet services such as low-cost telephone services that use Voice over Internet Protocol (VoIP). These services can reduce the customer base of telecommunications companies, many of which enjoy entrenched monopoly positions and some of which are government sponsored or controlled. Anti-copyright activists Christian Engström, Rick Falkvinge and Oscar Swartz have alleged that censorship of child pornography is being used as a pretext by copyright lobby organizations to get politicians to implement similar site-blocking legislation against copyright-related piracy. Examples include:

- File sharing and peer-to-peer (P2P) related websites such as The Pirate Bay
- Skype
- Sites that sell or distribute music but are not "approved" by rights holders, such as allofmp3

Network tools

Blocking the intermediate tools and applications of the Internet that can be used to assist users in accessing and sharing sensitive material is common in many countries. Examples include:

- Media sharing websites (e.g. Flickr and YouTube)
- Social networks (e.g. Facebook and Instagram)
- Translation sites and tools
- E-mail providers
- Web hosting sites
- Blog hosting sites such as Blogspot and Medium
- Microblogging sites such as Twitter and Weibo
- Wikipedia
- Censorship circumvention sites
- Anonymizers
- Proxy avoidance sites
- Search engines such as Bing and Google, particularly in Mainland China and Cuba

Information about individuals

The right to be forgotten is a concept that has been discussed and put into practice in the European Union. In May 2014, the European Court of Justice ruled against Google in Costeja, a case brought by a Spanish man who requested the removal of a link to a digitized 1998 article in La Vanguardia newspaper about an auction for his foreclosed home, for a debt that he had subsequently paid.
He initially attempted to have the article removed by complaining to Spain's data protection agency, the Agencia Española de Protección de Datos, which rejected the claim on the grounds that it was lawful and accurate, but accepted a complaint against Google and asked Google to remove the results. Google sued in Spain and the lawsuit was transferred to the European Court of Justice. The court ruled in Costeja that search engines are responsible for the content they point to, and thus Google was required to comply with EU data privacy laws. It began compliance on 30 May 2014, during which it received 12,000 requests to have personal details removed from its search engine. Index on Censorship claimed that the "Costeja ruling ... allows individuals to complain to search engines about information they do not like with no legal oversight. This is akin to marching into a library and forcing it to pulp books. Although the ruling is intended for private individuals it opens the door to anyone who wants to whitewash their personal history.... The Court's decision is a retrograde move that misunderstands the role and responsibility of search engines and the wider internet. It should send chills down the spine of everyone in the European Union who believes in the crucial importance of free expression and freedom of information."

Various contexts influence whether or not an internet user will be resilient to censorship attempts. Users are more resilient to censorship if they are aware that information is being manipulated. This awareness of censorship leads to users finding ways to circumvent it. Awareness of censorship also allows users to factor this manipulation into their belief systems. Knowledge of censorship also offers some citizens an incentive to try to discover information that is being concealed. In contrast, those who lack awareness of censorship cannot easily compensate for information manipulation. Other important factors for censorship resiliency are the demand for the information being concealed and the ability to pay the costs to circumvent censorship. Entertainment content is more resilient to online censorship than political content, and users with more education, technology access, and wider, more diverse social networks are more resilient to censorship attempts.

As more people in more places begin using the Internet for important activities, there is an increase in online censorship, using increasingly sophisticated techniques. The motives, scope, and effectiveness of Internet censorship vary widely from country to country. The countries engaged in state-mandated filtering are clustered in three main regions of the world: East Asia, Central Asia, and the Middle East/North Africa. Countries in other regions also practice certain forms of filtering. In the United States, state-mandated Internet filtering occurs on some computers in libraries and K-12 schools. Content related to Nazism or Holocaust denial is blocked in France and Germany. Child pornography and hate speech are blocked in many countries throughout the world. In fact, many countries throughout the world, including some democracies with long traditions of strong support for freedom of expression and freedom of the press, are engaged in some amount of online censorship, often with substantial public support.

Internet censorship in China is among the most stringent in the world.
The government blocks websites that discuss the Dalai Lama, the 1989 crackdown on Tiananmen Square protesters, the banned spiritual practice Falun Gong, as well as many general Internet sites. The government requires Internet search firms and state media to censor issues deemed officially "sensitive," and blocks access to foreign websites including Facebook, Twitter, and YouTube. According to a study in 2014, censorship in China is used to muzzle those outside government who attempt to spur the creation of crowds for any reason, whether in opposition to, in support of, or unrelated to the government. The government allows the Chinese people to say whatever they like about the state, its leaders, or their policies, because talk about any subject unconnected to collective action is not censored. The value that Chinese leaders find in allowing and then measuring criticism by hundreds of millions of Chinese people creates actionable information for them and, as a result, also for academic scholars and public policy analysts.

There are international bodies that oppose internet censorship; for example, "Internet censorship is open to challenge at the World Trade Organization (WTO) as it can restrict trade in online services, a forthcoming study argues".

International concerns

Generally, national laws affecting content within a country only apply to services that operate within that country and do not affect international services, but this has not been established clearly by international case law. There are concerns that, due to the vast differences in freedom of speech between countries, the ability of one country to affect speech across the global Internet could have chilling effects. For example, Google won a case at the European Court of Justice in September 2019 which ruled that the EU's right to be forgotten only applied to services within the EU, and not globally. But in a contrary decision in October 2019, the same court ruled that Facebook was required to comply globally with a takedown request made in relation to defamatory material posted to Facebook by an Austrian user that was libelous of another person and had been determined to be illegal under Austrian law. The case created a problematic precedent that the Internet may become subject to regulation under the strictest national defamation laws, which would limit free speech that may be acceptable in other countries.

Internet shutdowns

Several governments have resorted to shutting down most or all Internet connections in the country. This appears to have been the case on 27 and 28 January 2011 during the 2011 Egyptian protests, in what has been widely described as an "unprecedented" internet block. About 3,500 Border Gateway Protocol (BGP) routes to Egyptian networks were shut down from about 22:10 to 22:35 UTC on 27 January. This full block was implemented without cutting off major intercontinental fibre-optic links, with Renesys stating on 27 January, "Critical European-Asian fiber-optic routes through Egypt appear to be unaffected for now."

Beginning on 17 November 2019, in response to the Iranian fuel protests, an internet shutdown reduced Internet traffic in Iran to 5% of normal levels. Doug Madory, the director of Internet analysis at Oracle, described the operation as "unusual in its scale" and more advanced than previous efforts. Full blocks also occurred in Myanmar/Burma in 2007, Libya in 2011, Iran in 2019, and Syria during the Syrian civil war.
Full blocks also occurred in Myanmar/Burma in 2007, Libya in 2011, Iran in 2019, and Syria during the Syrian civil war. Almost all Internet connections in Sudan were disconnected from 3 June to 9 July 2019 in response to a political opposition sit-in seeking civilian rule. A near-complete shutdown in Ethiopia lasted for a week after the Amhara Region coup d'état attempt. A week-long shutdown in Mauritania followed disputes over the 2019 Mauritanian presidential election. Other country-wide shutdowns in 2019 include Zimbabwe, after protests over gasoline prices triggered police violence; Gabon, during the 2019 Gabonese coup d'état attempt; and the Democratic Republic of the Congo, Benin, Malawi, and Kazakhstan during or after elections. Local shutdowns are frequently ordered in India during times of unrest and security concerns. Some countries have used localized Internet shutdowns to combat cheating during exams, including Iraq, Ethiopia, India, Algeria, and Uzbekistan.

Internet shutdown in Iran

The Iranian government imposed a near-total internet shutdown from 16 to 23 November 2019 in response to the fuel protests, reducing Internet traffic in the country to about 5% of normal levels. Beginning on the afternoon of Saturday, 16 November 2019, the government ordered the disconnection of much of the country's internet connectivity as a response to widespread protests against its decision to raise fuel prices. Doug Madory, the director of Internet analysis at Oracle, described the operation as "unusual in its scale" and far more advanced than previous efforts. While Iran is no stranger to government-directed interference in its citizens' access to the internet, this outage was notable in how it differed from past events: unlike earlier episodes of censorship and bandwidth throttling, much of the population experienced a multi-day wholesale disconnection, arguably the largest such event ever for Iran.

Reports, ratings, and trends

[World map showing the status of YouTube blocking by country: has local YouTube version, accessible, blocked, or previously blocked.]

Detailed country-by-country information on Internet censorship is provided by the OpenNet Initiative, Reporters Without Borders, Freedom House, and the U.S. State Department Bureau of Democracy, Human Rights, and Labor's Human Rights Reports. The ratings produced by several of these organizations are summarized in the Internet censorship by country and Censorship by country articles.

OpenNet Initiative reports

Through 2010 the OpenNet Initiative had documented Internet filtering by governments in over forty countries worldwide. The level of filtering in 26 countries in 2007 and in 25 countries in 2009 was classified in three areas: political, social, and security. Of the 41 separate countries classified, seven showed no evidence of filtering in any of the three areas (Egypt, France, Germany, India, Ukraine, United Kingdom, and United States), while one engaged in pervasive filtering in all three areas (China), 13 engaged in pervasive filtering in one or more areas, and 34 engaged in some level of filtering in one or more areas. Of the 10 countries classified in both 2007 and 2009, one reduced its level of filtering (Pakistan), five increased their level of filtering (Azerbaijan, Belarus, Kazakhstan, South Korea, and Uzbekistan), and four maintained the same level of filtering (China, Iran, Myanmar, and Tajikistan).

Freedom on the Net reports

The Freedom on the Net reports from Freedom House provide analytical reports and numerical ratings regarding the state of Internet freedom for countries worldwide.
The countries surveyed represent a sample with a broad range of geographical diversity and levels of economic development, as well as varying levels of political and media freedom. The surveys ask a set of questions designed to measure each country's level of Internet and digital media freedom, as well as the access and openness of other digital means of transmitting information, particularly mobile phones and text messaging services. Results are presented for three areas: Obstacles to Access, Limits on Content, and Violations of User Rights. The results from the three areas are combined into a total score for a country (from 0 for best to 100 for worst), and countries are rated as "Free" (0 to 30), "Partly Free" (31 to 60), or "Not Free" (61 to 100) based on the totals (a small illustrative sketch of this mapping appears at the end of this section). Starting in 2009, Freedom House has produced nine editions of the report. There was no report in 2010. The reports generally cover the period from June through May.

During the Arab Spring of 2011, media jihad (media struggle) was extensive. Internet and mobile technologies, particularly social networks such as Facebook and Twitter, played important new and unique roles in organizing and spreading the protests and in making them visible to the rest of the world. An activist in Egypt tweeted, "we use Facebook to schedule the protests, Twitter to coordinate, and YouTube to tell the world". This successful use of digital media in turn led to increased censorship, including the complete loss of Internet access for periods of time in Egypt and Libya in 2011. In Syria, the Syrian Electronic Army (SEA), an organization that operates with at least the tacit support of the government, claims responsibility for defacing or otherwise compromising scores of websites that it contends spread news hostile to the Syrian government. SEA disseminates denial-of-service (DoS) software designed to target media websites, including those of Al Jazeera, BBC News, the Syrian satellite broadcaster Orient TV, and the Dubai-based Al Arabiya TV.

In response to the greater freedom of expression brought about by the Arab Spring revolutions in countries that were previously subject to very strict censorship, Reporters Without Borders moved Tunisia and Egypt from its "Internet enemies" list to its list of countries "under surveillance" in March 2011, and in 2012 dropped Libya from the list entirely. At the same time, there were warnings that Internet censorship might increase in other countries following the events of the Arab Spring. However, in 2013 the Libyan communication company LTT blocked pornographic websites, and even blocked the family-filtered videos of ordinary websites such as Dailymotion.
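For illustration only, the short sketch below (a hypothetical helper, not code published by Freedom House) maps a Freedom on the Net total score to the rating bands described above.

```python
# Hypothetical helper illustrating the Freedom on the Net bands described above:
# total scores run from 0 (best) to 100 (worst) and map onto three ratings.
def freedom_rating(total_score: int) -> str:
    if not 0 <= total_score <= 100:
        raise ValueError("total score must be between 0 and 100")
    if total_score <= 30:
        return "Free"
    if total_score <= 60:
        return "Partly Free"
    return "Not Free"

assert freedom_rating(12) == "Free"
assert freedom_rating(45) == "Partly Free"
assert freedom_rating(77) == "Not Free"
```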