Popular shared stories on NewsBlur.
1745 stories · 35790 followers

Rainbow

3 Comments and 17 Shares
Listen, in a few thousand years you'll invent a game called 'SimCity' which has a 'disaster' button, and then you'll understand.
3 public comments
llucax
2 days ago
Disaster button, how great!
Berlin
tedder
2 days ago
Great alt text.
Uranus
JayM
2 days ago
Hehehehe.
Boston Metro Area

Ditch “Culture Fit”

2 Comments and 9 Shares

A couple different talks at OSCON got me thinking about the unhealthy results of hiring on the basis of “culture fit”.


Slide from Casey West’s OSCON talk that says “Never in the history of my career has my ability to drink beer made me better at solving a business problem.”

What is company culture? Is it celebrating with co-workers around the company keg? Or would that exclude non-drinkers? Does your company value honest and direct feedback in meetings? Does that mean introverts and remote workers are talked over? Are long working hours and individual effort rewarded, to the point that people who value family are passed over for promotion?

Oftentimes, teams that don’t have a diverse network end up hiring people who have similar hobbies, backgrounds, and education. Companies need to avoid “groupthink” and focus on increasing diversity, because studies have shown that gender-diverse companies are 15% more likely to financially outperform other companies, and racially-diverse companies are 35% more likely to outperform. Other studies have shown that diversity can lead to more internal conflict, but the end result is a more productive team.

How do you change your company culture to value a diverse team? It’s much more than simply hiring more diverse people or making people sit through an hour of unconscious bias training. At OSCON, Casey West talked about some examples of company culture that create an inclusive environment where diverse teams can thrive:

  • Blame-free teams
  • Knowledge sharing culture
  • Continuous learning
  • No judgement on asking questions
  • Continuous feedback
  • Curiosity about different cultures
  • Individually defined work-life balance
  • Valuing empathy

For example, if you have a culture where there’s no judgement on asking questions or raising issues and people are naturally curious about different cultures, it’s easy for a team member to suggest a new feature that might make your product appeal to a broader customer base. After years of analyzing teams, Google found that the most productive teams foster a sense of “psychological safety”, a shared belief in expressing ideas without fear of humiliation.

The other problem with “culture fit” is that it’s an unevenly applied standard. An example of this was Kevin Stewart’s OSCON talk called “Managing While Black”. When Kevin emulated the company culture of pushing back on unnecessary requirements and protecting his team, he was told to “work on his personal brand”. White coworkers were reading him as “the angry black guy.” When he dialed it back, he was told he was “so articulate”, which is a non-compliment that relies on the stereotype that all African Americans are either uneducated or recent immigrants.

In both cases, even though his project was successful, Kevin had his team (and his own responsibilities) scaled back. After years of watching less successful white coworkers get promoted, he was told by management that they simply didn’t “see him in a leadership role.” Whether or not people of color emulate the white leadership behavior and corporate culture around them, they are punished because their coworkers are biased towards white leaders.

As a woman in technical leadership positions, I’ve faced similar “culture fit” issues. I’ve been told by one manager that I needed to be the “one true technical voice” (meaning as a leader I need to shout over the mansplainy guys on my team). And yet, when I clearly articulate valid technical or resourcing concerns to management, I’m “dismissive” of their goals. When I was a maintainer in the Linux kernel and adamantly pushed back on a patch that wall-papered over technical debt, I was told by another maintainer to “calm down”. (If you don’t think that’s a gendered slur based on the stereotype that women are “too emotional”, try imagining telling Linus Torvalds to calm down when he gets passionate about technical debt.)

The point is, traditional “culture fit” narratives and leadership behaviors only benefit the white cis males who created these cultural norms. Culture can be manipulated in the span of a couple of years to enforce or change the status quo. For example, computer programming used to be dominated by women, until hiring “personality tests” selected for men who displayed a “disinterest in people”.

We need to be deliberate about the company culture we cultivate. By hiring for empathy, looking for coworkers who are curious about different cultures, and rewarding leaders who don’t fit our preconceived notions, we create an inclusive work environment where people are free to be their authentic selves. Project Include has more resources and examples for people who are interested in changing their company’s culture.


Thanks for reading! If you want me to write more posts on diversity in tech, please consider donating to my Patreon.

sirshannon
21 hours ago
"studies have shown that gender-diverse companies are 15% more likely to financially outperform other companies, and racially-diverse companies are 35% more likely to outperform. "
1 public comment
tante
2 hours ago
Ditch "culture fit"
Oldenburg/Germany

Photographer Denis Cherim’s ‘Coincidence Project’ Explores Uncanny Moments of Synchronicity

1 Comment and 14 Shares

All photos © Denis Cherim

With an eye for unusual juxtapositions and serendipitous moments where the universe seems to synchronize itself just so, photographer Denis Cherim is there with his camera, seeing what the rest of us do not. The ongoing series, called the Coincidence Project, incorporates a wide variety of photographic approaches, from landscapes to street photography and occasionally portraiture. Gathered here are some of our favorites from the last few years, but you can see hundreds more photos by Cherim over on Flickr and Facebook. (via Booooooom)


1 public comment
jhamill
1 day ago
These are all fantastic.
California

Fizz Buzz in Tensorflow

5 Comments and 10 Shares

interviewer: Welcome, can I get you coffee or anything? Do you need a break?

me: No, I've probably had too much coffee already!

interviewer: Great, great. And are you OK with writing code on the whiteboard?

me: It's the only way I code!

interviewer: ...

me: That was a joke.

interviewer: OK, so are you familiar with "fizz buzz"?

me: ...

interviewer: Is that a yes or a no?

me: It's more of a "I can't believe you're asking me that."

interviewer: OK, so I need you to print the numbers from 1 to 100, except that if the number is divisible by 3 print "fizz", if it's divisible by 5 print "buzz", and if it's divisible by 15 print "fizzbuzz".

me: I'm familiar with it.

interviewer: Great, we find that candidates who can't get this right don't do well here.

me: ...

interviewer: Here's a marker and an eraser.

me: [thinks for a couple of minutes]

interviewer: Do you need help getting started?

me: No, no, I'm good. So let's start with some standard imports:

import numpy as np
import tensorflow as tf

interviewer: Um, you understand the problem is fizzbuzz, right?

me: Do I ever. So, now let's talk models. I'm thinking a simple multi-layer-perceptron with one hidden layer.

interviewer: Perceptron?

me: Or neural network, whatever you want to call it. We want the input to be a number, and the output to be the correct "fizzbuzz" representation of that number. In particular, we need to turn each input into a vector of "activations". One simple way would be to convert it to binary.

interviewer: Binary?

me: Yeah, you know, 0's and 1's? Something like:

def binary_encode(i, num_digits):
    return np.array([i >> d & 1 for d in range(num_digits)])

interviewer: [stares at whiteboard for a minute]

me: And our output will be a one-hot encoding of the fizzbuzz representation of the number, where the first position indicates "print as-is", the second indicates "fizz", and so on:

def fizz_buzz_encode(i):
    if   i % 15 == 0: return np.array([0, 0, 0, 1])
    elif i % 5  == 0: return np.array([0, 0, 1, 0])
    elif i % 3  == 0: return np.array([0, 1, 0, 0])
    else:             return np.array([1, 0, 0, 0])

interviewer: OK, that's probably enough.

me: That's enough setup, you're exactly right. Now we need to generate some training data. It would be cheating to use the numbers 1 to 100 in our training data, so let's train it on all the remaining numbers up to 1024:

NUM_DIGITS = 10
trX = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
trY = np.array([fizz_buzz_encode(i)          for i in range(101, 2 ** NUM_DIGITS)])

interviewer: ...

me: Now we need to set up our model in tensorflow. Off the top of my head I'm not sure how many hidden units to use, maybe 10?

interviewer: ...

me: Yeah, possibly 100 is better. We can always change it later.

NUM_HIDDEN = 100

We'll need an input variable with width NUM_DIGITS, and an output variable with width 4:

X = tf.placeholder("float", [None, NUM_DIGITS])
Y = tf.placeholder("float", [None, 4])

interviewer: How far are you intending to take this?

me: Oh, just two layers deep -- one hidden layer and one output layer. Let's use randomly-initialized weights for our neurons:

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

w_h = init_weights([NUM_DIGITS, NUM_HIDDEN])
w_o = init_weights([NUM_HIDDEN, 4])

And we're ready to define the model. As I said before, one hidden layer, and let's use, I don't know, ReLU activation:

def model(X, w_h, w_o):
    h = tf.nn.relu(tf.matmul(X, w_h))
    return tf.matmul(h, w_o)

We can use softmax cross-entropy as our cost function and try to minimize it:

py_x = model(X, w_h, w_o)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(cost)

interviewer: ...

me: And, of course, the prediction will just be the largest output:

predict_op = tf.argmax(py_x, 1)

interviewer: Before you get too far astray, the problem you're supposed to be solving is to generate fizz buzz for the numbers from 1 to 100.

me: Oh, great point, the predict_op function will output a number from 0 to 3, but we want a "fizz buzz" output:

def fizz_buzz(i, prediction):
    return [str(i), "fizz", "buzz", "fizzbuzz"][prediction]

interviewer: ...

me: So now we're ready to train the model. Let's grab a tensorflow session and initialize the variables:

with tf.Session() as sess:
    tf.initialize_all_variables().run()

Now let's run, say, 1000 epochs of training?

interviewer: ...

me: Yeah, maybe that's not enough -- so let's do 10000 just to be safe.

And our training data are sequential, which I don't like, so let's shuffle them each iteration:

    for epoch in range(10000):
        p = np.random.permutation(range(len(trX)))
        trX, trY = trX[p], trY[p]

And each epoch we'll train in batches of, I don't know, 128 inputs?

BATCH_SIZE = 128

So each training pass looks like

        for start in range(0, len(trX), BATCH_SIZE):
            end = start + BATCH_SIZE
            sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})

and then we can print the accuracy on the training data, since why not?

        print(epoch, np.mean(np.argmax(trY, axis=1) ==
                             sess.run(predict_op, feed_dict={X: trX, Y: trY})))

interviewer: Are you serious?

me: Yeah, I find it helpful to see how the training accuracy evolves.

interviewer: ...

me: So, once the model has been trained, it's fizz buzz time. Our input should just be the binary encoding of the numbers 1 to 100:

    numbers = np.arange(1, 101)
    teX = np.transpose(binary_encode(numbers, NUM_DIGITS))

And then our output is just our fizz_buzz function applied to the model output:

    teY = sess.run(predict_op, feed_dict={X: teX})
    output = np.vectorize(fizz_buzz)(numbers, teY)

    print(output)

interviewer: ...

me: And that should be your fizz buzz!

interviewer: Really, that's enough. We'll be in touch.

me: In touch, that sounds promising.

interviewer: ...

Postscript

I didn't get the job. So I tried actually running this (code on GitHub), and it turned out it got some of the outputs wrong! Thanks a lot, machine learning!

In [185]: output
Out[185]:
array(['1', '2', 'fizz', '4', 'buzz', 'fizz', '7', '8', 'fizz', 'buzz',
       '11', 'fizz', '13', '14', 'fizzbuzz', '16', '17', 'fizz', '19',
       'buzz', '21', '22', '23', 'fizz', 'buzz', '26', 'fizz', '28', '29',
       'fizzbuzz', '31', 'fizz', 'fizz', '34', 'buzz', 'fizz', '37', '38',
       'fizz', 'buzz', '41', '42', '43', '44', 'fizzbuzz', '46', '47',
       'fizz', '49', 'buzz', 'fizz', '52', 'fizz', 'fizz', 'buzz', '56',
       'fizz', '58', '59', 'fizzbuzz', '61', '62', 'fizz', '64', 'buzz',
       'fizz', '67', '68', '69', 'buzz', '71', 'fizz', '73', '74',
       'fizzbuzz', '76', '77', 'fizz', '79', 'buzz', '81', '82', '83',
       '84', 'buzz', '86', '87', '88', '89', 'fizzbuzz', '91', '92', '93',
       '94', 'buzz', 'fizz', '97', '98', 'fizz', 'fizz'],
      dtype='<U8')
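
To see exactly which numbers it got wrong, a quick sketch along these lines scores the output against the real fizz buzz, reusing fizz_buzz and fizz_buzz_encode from the interview (true_fizz_buzz is a new helper, not something defined above):

def true_fizz_buzz(i):
    # Decode the ground-truth one-hot label the same way the
    # model's predictions are decoded.
    return fizz_buzz(i, np.argmax(fizz_buzz_encode(i)))

# numbers and output are the arrays computed in the session above.
wrong = [(i, guess) for i, guess in zip(numbers, output)
         if guess != true_fizz_buzz(i)]
print(len(wrong), "mistakes:", wrong)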

I guess maybe I should have used a deeper network.
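
For reference, here's the boring deterministic version the interviewer was presumably expecting: a minimal sketch, no TensorFlow required.

# The conventional fizz buzz: three divisibility checks and a print.
for i in range(1, 101):
    if   i % 15 == 0: print("fizzbuzz")
    elif i % 5  == 0: print("buzz")
    elif i % 3  == 0: print("fizz")
    else:             print(i)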


TheRomit
11 hours ago
😂😂😂
santa clara, CA
sirshannon
1 day ago
The correct way to answer this sort of question.
skorgu
2 days ago
Glorious!
2 public comments
tingham
1 day ago
Reminds me of the amazon interview.
Cary, NC
skittone
2 days ago
This thoroughly amused me.

Saturday Morning Breakfast Cereal - Dirty Talk

1 Comment and 7 Shares

Hovertext: Personally, I always like to start sex with an apology.


New comic!
Today's News:

 Single-use monocles are BACK IN STOCK!

1 public comment
jlvanderzwan
4 days ago
On the importance of understanding the difference between talking dirty, and meta-talking dirty...

Laser Products

3 Comments and 13 Shares
ERRORS: HAIR JAM. COLOR-SAFE CONDITIONER CARTRIDGE RUNNING LOW. LEGAL-SIZE HAIR TRAY EMPTY, USING LETTER-SIZE HAIR ONLY.
3 public comments
rraszews
9 days ago
Just a reminder, one of the standard UNIX printer error messages is "Printer on fire"
olliejones
8 days ago
I worked on one of those printers. An old Xerox-made print engine fused the toner with a glowing wire. If the paper jammed (which it did a lot) the printer caught fire.
pawel_lasek
8 days ago
Supposedly it goes back to original line printers with rotating metal type heads... Printer ink&paper fire 🔥
Covarr
9 days ago
I always wanted a printer with a laser sight, so I could better aim my documents.
Moses Lake, WA
kyounger
9 days ago
PC LOAD LATHER