

Free Speech.

speech rights english enlightenment

466 replies to this topic

#461 Jeff


    Drum beating laughing boy

  • Members
  • 28,234 posts

Posted 30 October 2018 - 06:17 PM

It's also worth noting how willing these companies are to integrate political concerns into their daily operating parameters.


Indeed, the idea that market considerations will keep them in check flies out the window when you realize that they are True Believers. They are also now at the point where they create their own market "weather" so to speak, just like when a wildfire gets to a certain size.


#462 Ssnake


    Virtual Shiva Beast

  • Members
  • 5,957 posts
  • Gender:Male
  • Location:Hannover, Germany
  • Interests:Contemporary armor - tactics and technology

Posted 30 October 2018 - 06:32 PM

Again, even Tanknet is targeted and listed as a dangerous site even though we are all aware that it is not.


Most likely, that has to do with the site's lack of SSL encryption. A while ago, Google released a Chrome update that flags every site not secured with HTTPS as "dangerous," in an attempt to prod webmasters into finally adopting encryption. Not an entirely disagreeable step, if you ask me.
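The flagging rule described here is mechanical: it keys on the URL scheme, not the page content, which is why a harmless forum gets the warning. A minimal sketch of that logic (the function name and labels are hypothetical; real Chrome also checks certificate validity, mixed content, and more):

```python
# Minimal sketch of the address-bar labeling behavior described above.
# The function name and labels are hypothetical; real Chrome also checks
# certificate validity, mixed content, and more.
from urllib.parse import urlparse

def security_label(url):
    """Return the label a browser might show, based on the URL scheme alone."""
    scheme = urlparse(url).scheme.lower()
    if scheme == "https":
        return "Secure"
    if scheme == "http":
        return "Not secure"
    return "Unknown"

print(security_label("http://tank-net.com/forum"))  # flagged despite harmless content
print(security_label("https://example.com"))
```

The point of the sketch is that nothing about the page itself is inspected: the label depends entirely on whether the connection is encrypted.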


#463 Jeff


    Drum beating laughing boy

  • Members
  • 28,234 posts

Posted 30 October 2018 - 06:44 PM

The Electronic Committee of Public Safety
October 30, 2018 6:30 AM


Celebrities, politicians, and almost anyone of influence and wealth are always an incorrect or insensitive word away from the contemporary electronic guillotine. Regardless of the circumstances of their dilemmas, the beheaded rarely win sympathy from the mob. Coliseum-like roars of approval greet their abrupt change of fortune from their past exalted status.


So, for example, perhaps few feel sorry for anchor Megyn Kelly, recently all but fired by NBC and now walking away with most of her $69 million salary package as a severance payout.


Kelly was let go ostensibly for making a sloppy but not malicious morally equivalent comparison between whites at Halloween dressing up in costumes as blacks, and blacks likewise appearing as whites. But she sealed her fate by uttering the historically disparaging word “black face” as some sort of neutral bookend to her use of “white face.” Her fatal crime, then, was insensitive thought and speech and historical ignorance.


For someone so familiar with the rules of our electronic French Revolution and the felonies of speech and thought, Kelly proved surprisingly naïve in a variety of ways.


First, she should have known that there are revolutionary canons surrounding victimization indemnities. And for all her success, she is actually protected by few of them, given that she is fabulously well paid, attractive, still young, white — and at one time conservative and a former Fox News anchor person.


So when Kelly said something historically dense and insensitive, she should have grasped that she, despite being an emancipated coastal female, was immediately (and ratings-wise) expendable, even if expensively expendable.


Had Kelly been unapologetically progressive (especially one deemed vital to the cause), like Elizabeth Warren, who fabricated and profited from an entire minority identity, then she might well have survived the incident. Perhaps had she been a minority, such as Sarah Jeong, and written (rather than spoken off the cuff) far more racially offensive things about whites, she would have kept her job — as did Jeong on the New York Times editorial board after her racist tweets surfaced, such as this, from 2014: “Dumbass f****** white people marking up the internet with their opinions like dogs pissing on fire hydrants.”


Instead, Kelly was hauled to the electronic guillotine in a now familiar routine. An elite luminary (the mob has little concern with the thoughtcrimes of hoi polloi), sometimes even in sloppy and inadvertent rather than mean-spirited fashion, says something deemed illiberal or insensitive or even ideologically incorrect. Other elites in journalism, politics, and academia pounce and rush to social media to post their outrage in an endless internecine battle among (mostly white) virtue-signalers.


A competition ensues to prove who can play Robespierre best, by being either the most cleverly outraged or sincerely aggrieved, or the most vicariously victimized, or the most conniving to find advantage in the denunciation. A brief investigatory lull of a few minutes is often needed to sort out relative exemptions, much as Hébert and Danton, before heading to the guillotine, had their earlier revolutionary credentials nullified or recalibrated.


Millions of Internet sleuth volunteers play the 18th-century role of the shouting mob in the street, as they google the condemned person’s name in search of a prior quote from YouTube, Facebook, Twitter, or any random outlet that can prove a “pattern” of incorrect expression or counterrevolutionary behavior.


Within minutes, all sorts of earlier evidence of Kelly’s alleged illiberal or incorrect thought reappeared. In Kelly’s brief electronic trial, we were told in a nanosecond that she had once claimed that the Jewish Jesus was white and that St. Nicholas (the precursor to our cherubic Germanic Santa Claus) was as well — a mortal sin given that later Western representations of both as northern Europeans were cultural misappropriations of their Mediterranean Jewish and likely Greek identities. Thus, within an hour or so, a telltale fingerprint of Kelly’s supposedly longtime racism was discovered by the collective crime lab. At that point, the only suspense left was her small odds of escaping to some sort of victim refuge — at least beyond being a wealthy privileged white female in a wealthy white privileged male world of network news.


After the doomed wrongdoer is formally rounded up on the Internet, charged, and condemned, he or she begs the inquisitors for forgiveness. Tears and physical signs of real regret occasionally attest to weakness and are further proof of crimes to be punished; the contrition is never enough to earn forgiveness or magnanimity.


So Megyn Kelly confessed: “I realize now that such behavior is indeed wrong, and I am sorry. The history of blackface in our culture is abhorrent, the wounds too deep.”


A final reprieve is sometimes found in expressing a desire to enter revolutionary reeducation camp or at least correct-thought remediation. Next, Kelly threw herself on the mercy of her accusers by confessing her ignorance of the history of minstrel stereotyping: “I learned that, given the history of blackface being used in awful ways by racists in this country, it is not okay for that to be part of any costume, Halloween or otherwise.”


By contrast, recall the recent case of former astronaut Scott Kelly. When he tweeted admiration for Winston Churchill, exposing his felonious ignorance of Churchill’s crimes, he was virtually guillotined. He escaped the wrath of the Internet mob by swearing that he would reexamine the hitherto unknown dark side of Winston Churchill and thereby reeducate himself about Churchill’s mortal sins of colonialism and imperialism: “My apologies. I will go and educate myself further on his atrocities, racist views which I do not support.” And so he escaped the “national razor.” No doubt the Twitter and Facebook mob posed a greater peril to Kelly’s well-being than being strapped to a volatile, fuel-laden rocket and blasted into outer space.


At some point, an employer or high official, then, like a French revolutionary judge, weighs in with the condemnatory sentence, most fearful that ordering anything less than a trip to the ultimate barber is a window into his own dark and counterrevolutionary soul.


So the trick is for the boss or the corporation — in this case NBC chairman Andy Lack — to voice “shock” and “dismay,” and to do so in terms that assure the revolutionary mob that the miscreant most certainly did not learn such racist, insensitive, sexist, or subversive views from his or her superiors. Often the key is to denounce the accused in even stronger terms than the initial accusers did, and thereby not be the next head to drop in the basket. Lack proved equal to the challenge, intoning: “There is no other way to put this, but I condemn those remarks. There is no place on our air or in this workplace for them. Very unfortunate.”

Once fired and humiliated, the person is erased for a time from our revolutionary memories (we suddenly could not easily buy Garrison Keillor’s books, and Paula Deen seemed to vanish from television). Megyn Kelly will probably go into opulent seclusion and find herself disinvited from ceremonial appearances and speaking events, guillotined as a racist, with no more sympathy than a once privileged, beheaded Bourbon.


We now fear the lethal wrath of the Internet’s Committee of Public Safety. But beware of fickle revolutionary temperament. Soon our 21st-century Robespierres may become so promiscuous and obnoxious in their beheading that they wear out even the mob — and find themselves next in line on a counterrevolutionary chopping block.






#464 Mr King


    Putin's Personal Representative To Tanknet

  • Members
  • 18,066 posts
  • Gender:Male
  • Location:The Kremlin Of Course

Posted 16 January 2019 - 05:35 PM

And nothing will happen, because Google and the rest of the techno corps are deeply in bed with the state. 


‘THE SMOKING GUN’: Google Manipulated YouTube Search Results for Abortion, Maxine Waters, David Hogg


In sworn testimony, Google CEO Sundar Pichai told Congress last month that his company does not “manually intervene” on any particular search result. Yet an internal discussion thread leaked to Breitbart News reveals Google regularly intervenes in search results on its YouTube video platform – including a recent intervention that pushed pro-life videos out of the top ten search results for “abortion.”
The term “abortion” was added to a “blacklist” file for “controversial YouTube queries,” which contains a list of search terms that the company considers sensitive. According to the leak, these include search terms related to abortion, abortions, the Irish abortion referendum, Democratic Congresswoman Maxine Waters, and anti-gun activist David Hogg.
The existence of the blacklist was revealed in an internal Google discussion thread leaked to Breitbart News by a source inside the company who wishes to remain anonymous. A partial list of blacklisted terms was also leaked to Breitbart by another Google source.
In the leaked discussion thread, a Google site reliability engineer hinted at the existence of more search blacklists, according to the source.
“We have tons of white- and blacklists that humans manually curate,” said the employee. “Hopefully this isn’t surprising or particularly controversial.”
Others were more concerned about the presence of the blacklist. According to the source, the software engineer who started the discussion called the manipulation of search results related to abortion a “smoking gun.”
The software engineer noted that the change had occurred following an inquiry from a left-wing Slate journalist about the prominence of pro-life videos on YouTube, and that pro-life videos were replaced with pro-abortion videos in the top ten results for the search terms following Google’s manual intervention.
“The Slate writer said she had complained last Friday and then saw different search results before YouTube responded to her on Monday,” wrote the employee. “And lo and behold, the [changelog] was submitted on Friday, December 14 at 3:17 PM.”
The manually downranked items included several videos from Dr. Antony Levatino, a former abortion doctor who is now a pro-life activist. Another video in the top ten featured a woman’s personal story of being pressured to have an abortion, while another featured pro-life conservative Ben Shapiro. The Slate journalist who complained to Google reported that these videos previously featured in the top ten, describing them in her story as “dangerous misinformation.”
Since the Slate journalist’s inquiry and Google’s subsequent intervention, the top search results now feature pro-abortion content from left-wing sources like BuzzFeed, Vice, CNN, and Last Week Tonight With John Oliver. In her report, the Slate journalist acknowledged that the search results changed shortly after she contacted Google.
The manual adjustment of search results by a Google-owned platform contradicts a key claim made under oath by Google CEO Sundar Pichai in his congressional testimony earlier this month: that his company does not “manually intervene on any search result.”
A Google employee in the discussion thread drew attention to Pichai’s claim, noting that it “seems like we are pretty eager to cater our search results to the social and political agenda of left-wing journalists.”
One of the posts in the discussion also noted that the blacklist had previously been edited to include the search term “Maxine Waters” after a single Google employee complained the top YouTube search result for Maxine Waters was “very low quality.”
Google’s alleged intervention on behalf of a Democratic congresswoman would be further evidence of the tech giant using its resources to prop up the left. Breitbart News previously reported on leaked emails revealing the company targeted pro-Democrat demographics in its get-out-the-vote efforts in 2016.
According to the source, a software engineer in the thread also noted that “a bunch of terms related to the abortion referendum in Ireland” had been added to the blacklist – another change with potentially dramatic consequences on the national policies of a western democracy.
At least one post in the discussion thread revealed the existence of a file called “youtube_controversial_query_blacklist,” which contains a list of YouTube search terms that Google manually curates. In addition to the terms “abortion,” “abortions,” “Maxine Waters,” and search terms related to the Irish abortion referendum, a Google software engineer noted that the blacklist includes search terms related to terrorist attacks (the post specifically mentions the “Strasbourg terrorist attack” as being on the list).
“If you look at the other entries recently added to the youtube_controversial_query_blacklist (e.g., entries related to the Strasbourg terrorist attack), the addition of abortion seems…out-of-place,” wrote the software engineer, according to the source.
After learning of the existence of the blacklist, Breitbart News obtained a partial screenshot of the full blacklist file from a source within Google. It reveals that the blacklist includes search terms related to both mass shootings and the progressive anti-second amendment activist David Hogg.
This suggests Google has followed the lead of Democrat politicians, who have repeatedly pushed tech companies to censor content related to the Parkland school shooting and the Parkland anti-gun activists. It’s part of a popular new line of thought in the political-media establishment, which views the public as too stupid to question conspiracy theories for themselves.
Here is the partial blacklist leaked to Breitbart:
2117 plane crash Russian
2118 plane crash
2119 an-148
2120 florida shooting conspiracy
2121 florida shooting crisis actors
2122 florida conspiracy
2123 florida false flag shooting
2124 florida false flag
2125 fake florida school shooting
2126 david hogg hoax
2127 david hogg fake
2128 david hogg crisis actor
2129 david hogg forgets lines
2130 david hogg forgets his lines
2131 david hogg cant remember his lines
2132 david hogg actor
2133 david hogg cant remember
2134 david hogg conspiracy
2135 david hogg exposed
2136 david hogg lines
2137 david hogg rehearsing
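Mechanically, a blacklist file like the fragment above is simple to consult: each line carries a numeric ID and a search phrase, and an incoming query is normalized and checked for membership. The sketch below is a hypothetical reconstruction based only on the leaked format; the function names and normalization are assumptions, not Google's actual code.

```python
# Hypothetical sketch of consulting a curated query blacklist. The file
# format (numeric ID plus phrase per line) mirrors the leaked fragment;
# the function names and normalization are assumptions, not Google's code.

BLACKLIST_LINES = """\
2120 florida shooting conspiracy
2126 david hogg hoax
2128 david hogg crisis actor
"""

def load_blacklist(text):
    """Parse 'ID phrase' lines into a set of lowercase phrases."""
    phrases = set()
    for line in text.splitlines():
        parts = line.split(None, 1)  # strip the leading numeric ID
        if len(parts) == 2:
            phrases.add(parts[1].strip().lower())
    return phrases

def is_controversial(query, blacklist):
    """True if the normalized query appears on the curated list."""
    return query.strip().lower() in blacklist

bl = load_blacklist(BLACKLIST_LINES)
print(is_controversial("David Hogg hoax", bl))   # True
print(is_controversial("tank restoration", bl))  # False
```

An exact-match list of this kind only fires on the specific phrases curated into it, which is consistent with the highly specific entries (down to individual video titles) seen in the leak.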
The full internal filepath of the blacklist, according to another source, is:
Responding to a request for comment, a YouTube spokeswoman said the company wants to promote “authoritative” sources in its search results, but maintained that YouTube is a “platform for free speech” that “allow[s]” both pro-life and pro-abortion content.
YouTube’s full comment:
YouTube is a platform for free speech where anyone can choose to post videos, as long as they follow our Community Guidelines, which prohibit things like inciting violence and pornography. We apply these policies impartially and we allow both pro-life and pro-choice opinions. Over the last year we’ve described how we are working to better surface news sources across our site for news-related searches and topical information. We’ve improved our search and discovery algorithms, built new features that clearly label and prominently surface news sources on our homepage and search pages, and introduced information panels to help give users more authoritative sources where they can fact check information for themselves.
In the case of the “abortion” search results, YouTube’s intervention to insert “authoritative” content resulted in the downranking of pro-life videos and the elevation of pro-abortion ones.
A Google spokesperson took a tougher line than its YouTube subsidiary, stating that “Google has never manipulated or modified the search results or content in any of its products to promote a particular political ideology.”
However, in the leaked discussion thread, a member of Google’s “trust & safety” team, Daniel Aaronson, admitted that the company maintains “huge teams” that work to adjust search results for subjects that are “prone to hyperbolic content, misleading information, and offensive content” – all subjective terms that are frequently used to suppress right-leaning sources.
He also admitted that the interventions weren’t confined to YouTube – they included search results delivered via Google Assistant, Google Home, and in rare cases Google’s organic search results.
In the thread, Aaronson attempted to explain how search blacklisting worked. He claimed that highly specific searches would generate non-blacklisted results, even controversial ones. But the inclusion of highly specific terms in the YouTube blacklist, like “David Hogg cant remember his lines” – the name of an actual viral video – seems to contradict this.
Aaronson’s full post is copied below:
I work in Trust and Safety and while I have no particular input as to exactly what’s happening for YT I can try to explain why you’d have this kind of list and why people are finding lists like these on Code Search.
When dealing with abuse/controversial content on various mediums you have several levers to deal with problems. Two prominent levers are “Proactive” and “Reactive”:
Proactive: Usually refers to some type of algorithm/scalable solution to a general problem
E.g.: We don’t allow straight up porn on YouTube so we create a classifier that detects porn and automatically remove or flag for review the videos the porn classifier is most certain of
Reactive: Usually refers to a manual fix to something that has been brought to our attention that our proactive solutions don’t/didn’t work on and something that is clearly in the realm of bad enough to warrant a quick targeted solution (determined by pages and pages of policies worked on over many years and many teams to be fair and cover necessary scope)
E.g.: A website that used to be a good blog had its domain expire and was purchased/repurposed to spam Search results with autogenerated pages full of gibberish text, scraped images, and links to boost traffic to other spammy sites. It is manually actioned for violating policy
These Organic Search policies and the consequences to violating them are public
Manually reacting to things is not very scalable, and is not an ideal solution to most problems, so the proactive lever is really the one we all like to lean on. Ideally, our classifiers/algorithms are good at providing useful and rich results to our users while ignoring things that are not useful or not relevant. But as we all know, this isn’t exactly the case all the time (especially on YouTube).
From a user perspective, there are subjects that are prone to hyperbolic content, misleading information, and offensive content. Now, these words are highly subjective and no one denies that. But we can all agree generally, lines exist in many cultures about what is clearly okay vs. what is not okay. E.g. a video of a puppy playing with a toy is probably okay in almost every culture or context, even if it’s not relevant to the query. But a video of someone committing suicide and begging others to follow in his/her footsteps is probably on the other side of the line for many folks.
While my second example is technically relevant to the generic query of “suicide”, that doesn’t mean that this is a very useful or good video to promote on the top of results for that query. So imagine a classifier that says, for any queries on a particular text file, let’s pull videos using signals that we historically understand to be strong indicators of quality (I won’t go into specifics here, but those signals do exist). We’re not manually curating these results, we’re just saying “hey, be extra careful with results for this query because many times really bad stuff can appear and lead to a bad experience for most users”. Ideally the proactive lever did this for us, but in extreme cases where we need to act quickly on something that is so obviously not okay, the reactive/manual approach is sometimes necessary. And also keep in mind, that this is different for every product. The bar for changing classifiers or manual actions on spam in organic search is extremely high. However, the bar for things we let our Google Assistant say out loud might be a lot lower. If I search for “Jews run the banks” – I’ll likely find anti-semitic stuff in organic search. As a Jew, I might find some of these results offensive, but they are there for people to research and view, and I understand that this is not a reflection of how Google feels about this issue. But if I ask Google Assistant “Why do Jews run the banks” we wouldn’t be similarly accepting if it repeated and promoted conspiracy theories that likely pop up in organic search in her soothing voice.
Whether we agree or not, user perception of our responses, results, and answers of different products and mediums can change. And I think many people are used to the fact that organic search is a place where content should be accessible no matter how offensive it might be, however, the expectation is very different on a Google Home, a Knowledge Panel, or even YouTube.
These lines are very difficult and can be very blurry, we are all well aware of this. So we’ve got huge teams that stay cognizant of these facts when we’re crafting policies, considering classifier changes, or reacting with manual actions – these decisions are not made in a vacuum, but admittedly are also not made in a highly public forum like TGIF or IndustryInfo (as you can imagine, decisions/agreement would be hard to get in such a wide list – imagine if all your CLs were reviewed by every engineer across Google all the time). I hope that answers some questions and gives a better layer of transparency without going into details about our “Pepsi formula”.
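Aaronson's two "levers" can be sketched in a few lines: a proactive classifier auto-actions the cases it is most certain of, while queries on a curated sensitive list are reranked by generic quality signals rather than pure relevance. The following is an illustrative reconstruction of his description; every name, signal, and threshold here is invented.

```python
# Illustrative reconstruction of the "proactive" and "reactive" levers
# described above. All names, signals, and thresholds are invented.

def proactive_filter(videos, classifier_score, threshold=0.95):
    """Proactive lever: drop videos the abuse classifier is most certain about."""
    return [v for v in videos if classifier_score(v) < threshold]

def rank(videos, query, sensitive_queries, relevance, quality):
    """Reactive lever: for listed queries, quality signals outrank relevance."""
    if query.strip().lower() in sensitive_queries:
        key = lambda v: (quality(v), relevance(v, query))  # quality dominates
    else:
        key = lambda v: relevance(v, query)
    return sorted(videos, key=key, reverse=True)

# Toy data: per-video relevance and "quality" scores.
videos = ["a", "b", "c"]
rel = {"a": 0.9, "b": 0.8, "c": 0.5}
qual = {"a": 0.1, "b": 0.7, "c": 0.9}

result_normal = rank(videos, "puppies", set(),
                     lambda v, q: rel[v], lambda v: qual[v])
result_sensitive = rank(videos, "abortion", {"abortion"},
                        lambda v, q: rel[v], lambda v: qual[v])
print(result_normal)     # pure relevance order
print(result_sensitive)  # quality-first order
```

Note how the same corpus yields a different top result once the query appears on the list, which is the kind of reordering the thread describes: no video is removed, but which ones surface first changes entirely.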
The fact that Google manually curates politically contentious search results fits in with a wider pattern of political activity on the part of the tech giant.
In 2018, Breitbart News exclusively published a leaked video from the company that showed senior management in dismay at Trump’s election victory, and pledging to use the company’s power to make his populist movement a “hiccup” in history.
Breitbart also leaked “The Good Censor,” an internal research document from Google that admits the tech giant is engaged in the censorship of its own products, partly in response to political events.
Another leak revealed that employees within the company, including Google’s current director of Trust and Safety, tried to kick Breitbart News off Google’s market-dominating online ad platforms.
Yet another showed Google engaged in targeted turnout operations aimed to boost voter participation in pro-Democrat demographics in “key states” ahead of the 2016 election. The effort was dubbed a “silent donation” by a top Google employee.
Evidence for Google’s partisan activities is now overwhelming. President Trump has previously warned Google, as well as other Silicon Valley giants, not to engage in censorship or partisan activities. Google continues to defy him.




#465 Tim the Tank Nut



  • Members
  • 5,373 posts
  • Interests:WW2 Armor (mostly US)

Posted 17 January 2019 - 08:38 AM

so the question becomes "Are people going to believe their own lying eyes or Google" ?

I feel like people are so conditioned to accept what shows up online that the activities of the tech companies are likely to accelerate rather than be moderated.

One thing is certain.  Google has learned much while co-operating with the Chinese government.


#466 Mr King


    Putin's Personal Representative To Tanknet

  • Members
  • 18,066 posts
  • Gender:Male
  • Location:The Kremlin Of Course

Posted 17 January 2019 - 10:35 AM



#467 Mr King


    Putin's Personal Representative To Tanknet

  • Members
  • 18,066 posts
  • Gender:Male
  • Location:The Kremlin Of Course

Posted 17 January 2019 - 02:34 PM

Progressive fascists coming after debate teams.



Edited by Mr King, 17 January 2019 - 02:37 PM.

