Front-end web and user experience designer based in New York, NY.

Hide Under The Bakery Day!

You're hiding in the basement under the bakery for the next few hours until things calm down. The baker already saw you down there when he came down for a sack of flour. He nodded in your direction, then he went back upstairs.

Next time he comes down, he leaves a loaf of bread and a pat of butter and some water and you eat. You were starved. You didn't know you'd have to hide today so you didn't get to eat first. You also didn't get to go to the bathroom.

"Bathroom top of the stairs," is the first thing the baker says to you when he comes down an hour later. He heads upstairs leaving the door open for you. You assume it's safe for you to show yourself above ground. If the baker wanted to hand you over he could just go ahead and do that, so going upstairs shouldn't put you in any more danger than you're already in. You head up and the bathroom's right outside the basement door, to the right.

When you head back downstairs there's a plate of cookies waiting for you. Another glass of water. Also a rolled up apron placed as a pillow at the head of a sack of flour. You eat the cookies then lie down and you sleep.

When you wake the baker is standing over you. "Now or never," he says and he leads you upstairs and out the door where you see it's just before dawn. You climb into the back of his van, into the space he's made between cake box flats and sacks of flour. He moves a couple laundry sacks of dirty aprons over you to block you from any curious eyes then you lie still while he drives to the people who'll keep you safe.

Happy Hide Under The Bakery Day!

Furniture Design History: Why Do Wingback Chairs Have Wings?


If form follows function, then to the modern-day observer, the design of wingback chairs does not appear to make sense. Why add structure to, and upholster, the sides of a chair above the armrest? The sitter needs no lateral support and is not likely to even contact the wings in regular use.

The answer has to do with climate control, or more precisely the lack of it, in 1600s Britain, where this chair style originated. Lacking weatherstripping, caulk and triple-glazed windows, houses and buildings of the time were drafty affairs. The wingback chair was designed to sit in front of the fireplace, the dominant heating method of the time, while the wings on the sides prevented drafts from slicing through your little cocoon of warmth. The pronounced protrusion of the wingtips kept the breeze off of your ears and neck.

The earliest versions were made completely of wood, and upholstery was sparse.

Even when the chairs first did become fully upholstered, it was more springs and horsehair than plush polyurethane foam, as you can see in this cutaway.

Nowadays wingback chairs can be had in all manner of garish patterns, and with stuffing up the yin-yang…

...or in versions where the materials choice makes no sense given the original context.

To my eye, the classic leather ones look best.

In any case, now you can see that Arne Jacobsen's egg chair, when it came out in 1958, must've looked like the new VW Bug of its day: A modernized nod to the past.


1 public comment

satadru (New York, NY), 240 days ago:
TIL...

Upping our insult game


Carmen Fought observes that "Fellow citizens, we have to up our insult game. The Scots are making us look like wankers. #mangledapricothellbeast".

Certainly the Scots have taught us a wide variety of new words and insult phrases in response to Donald Trump's tweet about Brexit.

And so on…

Given Mr. Trump's role in pushing the envelope of American political insults, many will consider this to be karmic justice. It's getting picked up on the backstreets of the internet, and may have some of the same impact on his future that "lyin' Ted" and "liddle Marco" had on the careers of his previous opponents.

There were also apparently some more subtle forms of communication associated with his trip:

4 public comments

skorgu, 417 days ago:
"incompressible jizztrumpet" makes me giggle uncontrollably every time.

gangsterofboats, 419 days ago:
Because this *totally* proves that Trump is wrong. /s

jlvanderzwan, 419 days ago:
It disproves the old claim that only Republicans know how to sling insults

dukeofwulf, 417 days ago:
Astute observation, gangster. In fact, there are PLENTY of detailed takedowns of both Brexit and Trump's response to it out there, including the Scalzi post immediately below (assuming that you're viewing this on the "People Have Spoken" feed). This is more of a "laughing so we can stop crying" sort of post.

dukeofwulf, 417 days ago:
I'm also partial to this one: http://gawker.com/1782555070

glenn (Waterloo, Canada), 419 days ago:
tiny fingered, Cheeto-faced, ferret wearing shitgibbon

jepler (Earth, Sol system, Western spiral arm), 419 days ago:
you polyester cockwomble

Parque Ecológico do Tororó, Brazil


Fizz Buzz in Tensorflow


interviewer: Welcome, can I get you coffee or anything? Do you need a break?

me: No, I've probably had too much coffee already!

interviewer: Great, great. And are you OK with writing code on the whiteboard?

me: It's the only way I code!

interviewer: ...

me: That was a joke.

interviewer: OK, so are you familiar with "fizz buzz"?

me: ...

interviewer: Is that a yes or a no?

me: It's more of a "I can't believe you're asking me that."

interviewer: OK, so I need you to print the numbers from 1 to 100, except that if the number is divisible by 3 print "fizz", if it's divisible by 5 print "buzz", and if it's divisible by 15 print "fizzbuzz".

me: I'm familiar with it.

interviewer: Great, we find that candidates who can't get this right don't do well here.

me: ...

interviewer: Here's a marker and an eraser.

me: [thinks for a couple of minutes]

interviewer: Do you need help getting started?

me: No, no, I'm good. So let's start with some standard imports:

import numpy as np
import tensorflow as tf

interviewer: Um, you understand the problem is fizzbuzz, right?

me: Do I ever. So, now let's talk models. I'm thinking a simple multi-layer-perceptron with one hidden layer.

interviewer: Perceptron?

me: Or neural network, whatever you want to call it. We want the input to be a number, and the output to be the correct "fizzbuzz" representation of that number. In particular, we need to turn each input into a vector of "activations". One simple way would be to convert it to binary.

interviewer: Binary?

me: Yeah, you know, 0's and 1's? Something like:

def binary_encode(i, num_digits):
    return np.array([i >> d & 1 for d in range(num_digits)])
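(For the record, that's least-significant bit first. If you want to sanity-check it at home — a quick sketch, not part of the whiteboard session:)

```python
import numpy as np

def binary_encode(i, num_digits):
    # least-significant bit first: 9 is 0b1001, so we get [1, 0, 0, 1]
    return np.array([i >> d & 1 for d in range(num_digits)])

print(binary_encode(9, 4))     # [1 0 0 1]
print(binary_encode(100, 10))  # [0 0 1 0 0 1 1 0 0 0]
```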

interviewer: [stares at whiteboard for a minute]

me: And our output will be a one-hot encoding of the fizzbuzz representation of the number, where the first position indicates "print as-is", the second indicates "fizz", and so on:

def fizz_buzz_encode(i):
    if   i % 15 == 0: return np.array([0, 0, 0, 1])
    elif i % 5  == 0: return np.array([0, 0, 1, 0])
    elif i % 3  == 0: return np.array([0, 1, 0, 0])
    else:             return np.array([1, 0, 0, 0])

interviewer: OK, that's probably enough.

me: That's enough setup, you're exactly right. Now we need to generate some training data. It would be cheating to use the numbers 1 to 100 in our training data, so let's train it on all the remaining numbers up to 1024:

NUM_DIGITS = 10
trX = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
trY = np.array([fizz_buzz_encode(i)          for i in range(101, 2 ** NUM_DIGITS)])
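(That range gives us 923 training examples — everything from 101 up to 1023. A quick shape check, re-using the two encoders above, nothing the interviewer saw:)

```python
import numpy as np

def binary_encode(i, num_digits):
    return np.array([i >> d & 1 for d in range(num_digits)])

def fizz_buzz_encode(i):
    if   i % 15 == 0: return np.array([0, 0, 0, 1])
    elif i % 5  == 0: return np.array([0, 0, 1, 0])
    elif i % 3  == 0: return np.array([0, 1, 0, 0])
    else:             return np.array([1, 0, 0, 0])

NUM_DIGITS = 10
trX = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
trY = np.array([fizz_buzz_encode(i)          for i in range(101, 2 ** NUM_DIGITS)])
print(trX.shape, trY.shape)  # (923, 10) (923, 4): 1023 - 101 + 1 = 923 rows
```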

interviewer: ...

me: Now we need to set up our model in tensorflow. Off the top of my head I'm not sure how many hidden units to use, maybe 10?

interviewer: ...

me: Yeah, possibly 100 is better. We can always change it later.

NUM_HIDDEN = 100

We'll need an input variable with width NUM_DIGITS, and an output variable with width 4:

X = tf.placeholder("float", [None, NUM_DIGITS])
Y = tf.placeholder("float", [None, 4])

interviewer: How far are you intending to take this?

me: Oh, just two layers deep -- one hidden layer and one output layer. Let's use randomly-initialized weights for our neurons:

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

w_h = init_weights([NUM_DIGITS, NUM_HIDDEN])
w_o = init_weights([NUM_HIDDEN, 4])

And we're ready to define the model. As I said before, one hidden layer, and let's use, I don't know, ReLU activation:

def model(X, w_h, w_o):
    h = tf.nn.relu(tf.matmul(X, w_h))
    return tf.matmul(h, w_o)
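(What those two ops compute is easy to check in plain NumPy — a sketch with names of my own choosing, not the TensorFlow graph itself:)

```python
import numpy as np

rng = np.random.default_rng(0)
w_h = rng.normal(scale=0.01, size=(10, 100))  # input bits -> hidden units
w_o = rng.normal(scale=0.01, size=(100, 4))   # hidden units -> 4 logits

def model_np(X, w_h, w_o):
    h = np.maximum(np.matmul(X, w_h), 0)  # ReLU hidden layer
    return np.matmul(h, w_o)              # raw logits; softmax happens in the loss

X = rng.integers(0, 2, size=(5, 10))      # a batch of 5 binary-encoded numbers
print(model_np(X, w_h, w_o).shape)        # (5, 4)
```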

We can use softmax cross-entropy as our cost function and try to minimize it:

py_x = model(X, w_h, w_o)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(cost)

interviewer: ...

me: And, of course, the prediction will just be the largest output:

predict_op = tf.argmax(py_x, 1)

interviewer: Before you get too far astray, the problem you're supposed to be solving is to generate fizz buzz for the numbers from 1 to 100.

me: Oh, great point, the predict_op function will output a number from 0 to 3, but we want a "fizz buzz" output:

def fizz_buzz(i, prediction):
    return [str(i), "fizz", "buzz", "fizzbuzz"][prediction]
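(A quick sanity check of the decoder, outside the interview — the four class indices map straight onto the four outputs:)

```python
def fizz_buzz(i, prediction):
    # prediction indexes into [print as-is, "fizz", "buzz", "fizzbuzz"]
    return [str(i), "fizz", "buzz", "fizzbuzz"][prediction]

print(fizz_buzz(7, 0))   # '7'
print(fizz_buzz(9, 1))   # 'fizz'
print(fizz_buzz(10, 2))  # 'buzz'
print(fizz_buzz(15, 3))  # 'fizzbuzz'
```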

interviewer: ...

me: So now we're ready to train the model. Let's grab a tensorflow session and initialize the variables:

with tf.Session() as sess:
    tf.initialize_all_variables().run()

Now let's run, say, 1000 epochs of training?

interviewer: ...

me: Yeah, maybe that's not enough -- so let's do 10000 just to be safe.

And our training data are sequential, which I don't like, so let's shuffle them each iteration:

    for epoch in range(10000):
        p = np.random.permutation(range(len(trX)))
        trX, trY = trX[p], trY[p]

And each epoch we'll train in batches of, I don't know, 128 inputs?

BATCH_SIZE = 128

So each training pass looks like

        for start in range(0, len(trX), BATCH_SIZE):
            end = start + BATCH_SIZE
            sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})

and then we can print the accuracy on the training data, since why not?

        print(epoch, np.mean(np.argmax(trY, axis=1) ==
                             sess.run(predict_op, feed_dict={X: trX, Y: trY})))

interviewer: Are you serious?

me: Yeah, I find it helpful to see how the training accuracy evolves.

interviewer: ...

me: So, once the model has been trained, it's fizz buzz time. Our input should just be the binary encoding of the numbers 1 to 100:

    numbers = np.arange(1, 101)
    teX = np.transpose(binary_encode(numbers, NUM_DIGITS))
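(binary_encode happily accepts a whole NumPy array here, since `>>` and `&` broadcast elementwise; each bit position yields a length-100 row, hence the transpose. A quick shape check:)

```python
import numpy as np

def binary_encode(i, num_digits):
    # with an array argument, each comprehension entry is itself an array
    return np.array([i >> d & 1 for d in range(num_digits)])

numbers = np.arange(1, 101)
teX = np.transpose(binary_encode(numbers, 10))
print(teX.shape)  # (100, 10): one 10-bit row per number
print(teX[2])     # the bits of 3, least-significant first
```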

And then our output is just our fizz_buzz function applied to the model output:

    teY = sess.run(predict_op, feed_dict={X: teX})
    output = np.vectorize(fizz_buzz)(numbers, teY)

    print(output)

interviewer: ...

me: And that should be your fizz buzz!

interviewer: Really, that's enough. We'll be in touch.

me: In touch, that sounds promising.

interviewer: ...

Postscript

I didn't get the job. So I tried actually running this (code on GitHub), and it turned out it got some of the outputs wrong! Thanks a lot, machine learning!

In [185]: output
Out[185]:
array(['1', '2', 'fizz', '4', 'buzz', 'fizz', '7', '8', 'fizz', 'buzz',
       '11', 'fizz', '13', '14', 'fizzbuzz', '16', '17', 'fizz', '19',
       'buzz', '21', '22', '23', 'fizz', 'buzz', '26', 'fizz', '28', '29',
       'fizzbuzz', '31', 'fizz', 'fizz', '34', 'buzz', 'fizz', '37', '38',
       'fizz', 'buzz', '41', '42', '43', '44', 'fizzbuzz', '46', '47',
       'fizz', '49', 'buzz', 'fizz', '52', 'fizz', 'fizz', 'buzz', '56',
       'fizz', '58', '59', 'fizzbuzz', '61', '62', 'fizz', '64', 'buzz',
       'fizz', '67', '68', '69', 'buzz', '71', 'fizz', '73', '74',
       'fizzbuzz', '76', '77', 'fizz', '79', 'buzz', '81', '82', '83',
       '84', 'buzz', '86', '87', '88', '89', 'fizzbuzz', '91', '92', '93',
       '94', 'buzz', 'fizz', '97', '98', 'fizz', 'fizz'],
      dtype='<U8')

I guess maybe I should have used a deeper network.
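(For comparison, the answer the interviewer was presumably fishing for takes a few lines of plain Python, no GPU required — `fizz_buzz_ref` is my name, not theirs. Checked element by element against the array above, it shows where the network slipped: 21, for instance, should have been "fizz".)

```python
def fizz_buzz_ref(i):
    # the deterministic version: divisibility checks, most specific first
    if i % 15 == 0: return "fizzbuzz"
    if i % 3 == 0:  return "fizz"
    if i % 5 == 0:  return "buzz"
    return str(i)

expected = [fizz_buzz_ref(i) for i in range(1, 101)]
print(expected[:15])
```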


6 public comments

sness, 447 days ago:
lol

xbai, 444 days ago:
Lol, typical

TheRomit (santa clara, CA), 450 days ago:
😂😂😂

tingham (Cary, NC), 451 days ago:
Reminds me of the amazon interview.

sirshannon, 451 days ago:
The correct way to answer this sort of question.

skittone, 452 days ago:
This thoroughly amused me.

skorgu, 452 days ago:
Glorious!

The Offenders

5 public comments

josephwebster (Denver, CO, USA), 449 days ago:
Read it carefully

norb (clmbs.oh), 452 days ago:
That one took me a second

kleer001, 458 days ago:
Worth the wait

davelevy, 457 days ago:
Thanks for mentioning it. Thought it was a bad call because it was taking so long

theprawn, 459 days ago:
Oh snap

jsled (South Burlington, Vermont), 459 days ago:
:D