Dan Dot Blog

Based on a true story


Well, I finally made it to Beijing! I guess I actually arrived last Saturday, but things have been so crazy busy that I haven’t had time to write. I want to try to start keeping a more consistent journal here; otherwise the task of writing a comprehensive update starts to feel daunting, so I put it off and it just becomes an ever heftier writing assignment 😛

I arrived very well rested and full of energy thanks to my awesome flight arrangements (thanks Adam!) and got a lift from the airport to my hotel from Lu Li, the husband of a colleague who lives in Beijing.

I’m in China primarily to work to get a new study set up that will be run at both Bei Da (Peking University) and the Psych Institute at the Chinese Academy of Sciences.

Xiaobei is a student at Bei Da who has been really helpful to me. We became buddies in Shanghai when we both attended ICDL last summer and she has been a great companion for this trip, as well.

So far I’ve been eating lots of yummy food (pictured: giant fish head), I hiked a big section of the Great Wall, which was crazy intense (very vertical and falling apart), and I’ve gotten a lot of good academic work done. I’d love to write more, but my battery is dying, so I’ll sign off for now. More is coming soon 🙂

I hope everyone back home is doing well!




April 8, 2010 Posted by | Personal | , , , , , | 2 Comments


Right now I’m only about an hour from Beijing. The flight so far has been enormously pleasant. Thanks to my former roommate Adam’s general awesomeness, I spent most of the flight very nearly fully reclined in business class. I managed to sleep probably 8 or 9 hours thanks to a little Ambien and my generally sleep-deprived schedule lately, so I’m confident I’ll be landing energized and ready to explore.

I didn’t get any work done on the plane, which is fine, as sleeping was really objective number one. While I did manage to load up a ton of data to take with me before I left my office at 4 AM the night before I departed, I have a backlog of analytic and preparatory tasks that need to be accomplished before I begin working in Beijing. I’ve heard there may be an extended holiday this week in light of the expo in Shanghai, but I don’t know if that will affect things in Beijing.

April 3, 2010 Posted by | Personal | , , , | Leave a comment


After a long and weary trip of frequently perturbed somnolence, I have arrived in the Windy City.

Highlights included extraordinarily salty sunflower seeds and an overturned semi.

Waypoint 1 accomplished.

April 1, 2010 Posted by | Personal | Leave a comment

Going to China

Hi everyone!

This is my first test of a publishing system that I’m hoping will allow me to keep my WordPress blog going while I’m in China for the next couple weeks. Keep an eye out for new posts as I have more adventures (and hopefully not too many misadventures) in China!


March 31, 2010 Posted by | Personal | Leave a comment

Honey Bees and 3rd Grade Charm

Last Thursday I was transported back to middle school.

I have a friend who, each February, does this incredibly sweet thing. There’s a whole production that goes into it, involving fund-raising and weeks of preparation, but it ultimately reaches its peak with her costumed as a fairy, distributing hand-made cards, candy, and good cheer to those most in need of them on Valentine’s Day (think area hospices, pediatric wards, etc.). I really, really like this idea, and was thrilled when I was invited to help prepare the hand-outs.

I forgot that I have poor fine motor coordination, ADD, am colorblind, lack any artistic vision, and am miserably uncreative when it comes to aesthetics. There is nothing more intimidating than a blank canvas, or, for me, a blank piece of colored construction paper.

As I arrived at the bar where the assembly line was already in full swing, I sat down and was immediately overwhelmed by the situation. I grew quiet and tense as my inner monologue, usually easily excerpted verbally into conversation, went silent. My eyes nervously darted around the room and suddenly I was back in time, trying to kludge together a papier mache snowman, or pop-up book illustrating my knowledge of Nicaragua, or cut-out snowman. More than trying to accomplish something I was really just trying to find a way to disappear, bide my time until the activity ceased, and then run away as fast as I could. I’ve always loved school—it was an arena where I excelled and drew enormous sums of confidence and identity—but on arts and crafts day I was just a terrified poor student, trying to keep my head down and get the hell out of the situation.

Weirdly, this is the strategy I adopted again. As I looked around the table I saw my friends gleefully glue-sticking, integrating stickers, glitter, markers, and crayons into visual masterpieces that I knew would really make somebody feel special. Meanwhile, I fumbled through my box of crayons. The paper I had selected was green, so to me it looked brown. That meant that decorating it with green was out. Brown is a dumb color, and the paper already looked brown, so strike that. Red seemed like a natural choice for Valentine’s Day, but every time I tried to find a red crayon I pulled out either a green, limegreen, seagreen, brown, limebrown, seascum brown-green, or browngreenbrown crayon. When I finally did find a red crayon, it seemed dull and uninteresting, like dried clay. I would’ve been unexcited to find this color in nature, so why would somebody want to look at it on my card?

Fortunately, nobody seemed to be paying much attention to me, and repeatedly changing crayons at least gave off the illusion of progress. I was succeeding in getting out of the situation! This small triumph was quickly overtaken by pangs of guilt as I realized that this card was not something for my mother who had long ago downwardly revised any expectations of me expressing my affection and gratitude through the visual arts, but was meant to warm the heart of some downtrodden stranger. I was going to have to dig deep and come up with something moving. Surely if I opened my own heart and peered into it honestly something creative, adorable, and uplifting would emerge.

This resulted in the idea of drawing a flower in the upper right-hand corner of the front of the card, which so far bore only a clumsily cut-out heart surrounded by glitter glue. The glue was supposed to form a concentric heart, but instead it looked like a glue gun had unexpectedly gone off in my hand and I had lazily done my best to capture its issue on unused paper, avoiding the only contribution I had so far made to the card.

Moved to action by my vision of a flower (which, to be honest, did not come from the depths of my soul but from squinting at other people’s cards, treating them like Rorschach blots for inspiration), I scribbled away with the crayon that happened to be in my hand. The result was disappointing. To me it looked like I had wiped mud on some slightly more dried mud, and not only that, it looked more like a “Y” than a flower. Not one to be discouraged, I decided that when life gave me uninspiring lemongrass, I could make lemonade, or at least an honest-to-goodness “Y”. Without thinking I elongated the “stem” of my flower, and now had the beginnings of a word. The same muddy crayon easily multiplied this zygote of a thought into “You.”

Word complete, I decided to go for broke and make a sentence. I had a subject, so why not add a predicate? Unwilling to think ahead but not wanting to draw myself into a corner, I went with the most general verb I could think of, “are,” and hastily plopped it in the same lazy scrawl toward the bottom of the card. The sentence was grammatically complete, but I realized that I now shouldered a heavy burden. Telling somebody what they are is not something to be done lightly. Even when complimenting people, I tend to avoid it: “You look pretty today!” or “You smell nice!” And when dodging every conjugation of “to be” would be too awkward, I can at least contract it: “You’re good at this!” I had committed a full word to “are,” and anything that followed it would necessarily carry some weight.

There’s an ambiguity with “to be” in English in that it can be interpreted to indicate either permanence or transience. Spanish does a better job distinguishing these two notions with estar and ser (while also drawing seemingly arbitrary distinctions between them when describing the weather and other weirdly grammatically cordoned-off domains), but English leaves it up to the listener’s imagination. I had never met the imaginary person who would be reading my card, so what business did I have telling them what they were? I could cop out and use the present progressive (“You are reading this card!” *snicker*), but that seemed really lame and unlikely to brighten the day of whoever my unlucky recipient was to be. What would I want to be told I was? Could I leverage the permanence of “are,” make a value judgement that lay in the realm of possibility, and simultaneously spread cheer and love? After discarding “so special” and “really neat,” I settled upon “loved,” and I decided that I could make it true. Whoever received my card, I would love them; indeed, I already did.
I pitied them a little, too, that they should receive my card, and not the handsome, aesthetic gift one of my tablemates crafted, but ugly card or not, what I could offer was love, and isn’t that what Valentine’s Day is all about? Satisfied with the sentence, I adhered some other prefab cuteness in the form of stickers to my card and tossed it in the box where finished product went, feeling more confident that I could make some contribution to the group’s effort.

Picking up steam, I moved on to my next charm. I had had such good luck following my random idea to draw a flower/Y I decided to ask the stochasticity generator in my brain to give me another random image. Embarrassingly, it came back with a Venn Diagram.

Alright, I can draw a Venn Diagram. Maybe the circles won’t be shapely, but at least I can draw it, and if I color it, maybe it will look cool and deco to somebody. My hands strayed from the Platonic form in my head, but when they were finished it did look kind of like a Venn Diagram, though I was somewhat worried by how narrow the overlap was. Casting aside my concerns, I colored it. This time I had started with white paper so as not to immediately render half of my crayon palette redundant to my colorblind eye, so I lit it up with blues and yellows. When I was finished, I saw what had disturbed me when it had merely been an outline. Rendered in color, the narrow overlap in the middle looked more like a slit, a narrow opening that, when set between two ovals, seemed sexual and dirty. My Rorschach approach had come back to bite me in the ass, as the only thing I could see in my Venn Diagram was a vagina.

Panicking, I scrambled for something to turn my dirty Venn Diagram into. Sticking with fertility, I thought I could do a birds-and-the-bees theme, which was a nod to my original, dirty interpretation of my art but had much greater potential for cuteness. The Venn Diagram would not have made a flight-worthy bird, so instead I made it a butterfly, and it actually came out pretty well. That just left a bee, and for the first time, I knew exactly what I wanted to draw before I started.

It came out fantastic! The bee was innocent and charming, trying its best, in its awkward way, to make you smile, and served as a weird self-portrait of its creator. As the bee tugged at my heart strings, I realized that although I was not likely to move anyone to tears with the beauty of my art, at the very least I could settle for sweet and make an honest attempt to create something that would make somebody happy. I hope that whoever got my cards experienced some small pleasure when looking at them. I know that that bee was hard to part with, so it had better be magneted to somebody’s refrigerator right now, offering its innocent charm to the beholder.

If I’m going to fail at something anyway, I might as well do it cutely.

February 16, 2010 Posted by | Personal | , , , , , , , , , | Leave a comment

Modeling Reality

I had started this long post on data, open source development, and the greater availability of public records, but it was sounding like I usually do when trying to forcibly relate 10 different themes playing through my brain. I’m sure none of you want to hear my sophomorically newspaperman pitch of a subject that, while terribly exciting to me, I can hardly claim any authority on. So instead I’m going to start with my experiences, and a tension I’m feeling right now between a life of exploratory vs. confirmatory science.

My whole life I have really enjoyed structure. When I’m learning something new, I like to shake out the underlying concepts, then stretch them to their breaking points in an attempt to better understand the deep structure that governs the thing I’m learning about. This approach requires a good combination of deduction and empiricism. I start with a reasonable postulate handed down by an authority, draw a timid analogy to some perhaps-related speck of information, then take one end of my new meme and run as fast as I can in one direction, testing its elasticity and predictive power until it snaps and I am whiplashed back toward my starting point, whereupon I pick up a new rubber idea and run out in a new direction. This approach relies on reasoning and flashes of insight to get anywhere. I love explaining things, even when I’m terribly unsure. Ask me anything and I’ll probably give you an answer, even if my knowledge on the subject is extraordinarily limited. I treat such inquiries as invitations to ponder, to explore mental space, to simulate, to make wild predictions, and to arrive at some semblance of truth that is scarcely plausible.

This is what excited and frustrated me about psychology for so long. Whenever I prepared talks or presentations, I would be slapped down for speculating about my results, for drawing deep inferences from a shallow pool of data. For me to be excited about my data, it had to make sense, even if the story that explained it was far-fetched or otherwise ill-supported. These games helped me organize my knowledge in useful ways. Sure, I was gaining relatively little truth, but my new thoughts weren’t pure noise. Underneath the noise were kernels of truth, and indeed the noise itself told me something about my thinking. I found that this approach really helped me learn new material. Even tenuous connections to far-flung data give an idea more attachment points, keeping it from floating out of my head like a hot air balloon. It seemed that connections were what mattered, not whether they made sense. That would all get sorted out in time.

Then I started playing with and thinking about computational statistics. As computational power increases, our time-saving, truth-seeking heuristics are obviated. The deductive forces that guide and ration our precious mental resources become more harmful than helpful. Having read a lot of science fiction about a computational singularity, I felt like our old ways of reasoning made less and less sense. Random walks through understanding started to look a lot more appealing. All of the cleverness, innovation, and gestalt insight that went into optimizing our reasoning was made obsolete by the promise of being able to brute-force anything.

In my computational statistics class, we learned about Monte Carlo integration. Wikipedia can probably do a better job of explaining it than me, but I’m going to try. If you want to skip my attempt at explanation, I’ll blockquote it so you can just jump over it.

The idea is really neat and beautifully inverts the historic relationship between statistics and mathematics (especially probability). In statistics, when you want to predict the behavior of a random variable, you magically arrive at some of its fundamental properties, namely its probability density function (pdf). From that, you simply find the integral of its pdf multiplied by the function of the random variable whose behavior you’re interested in [take f(x) = x for an easy example], and voila, you have the expectation of that function of the random variable. This all works great on a chalkboard, but in reality, integration is hard, even to approximate, when dealing with functions (typically from the pdf) that are at all unusual, and in practice, pdfs of even vanilla random variables can be unfathomably complicated; even the standard normal pdf is (1/√(2π))·e^(−x²/2).

It’s enough to make you grimace. But why not turn that frown upside-down, and the whole process while you’re at it? Let’s say you start with a really hard integral. What if you could rewrite it as the function of a random variable multiplied by the pdf of that random variable? Well, then its integral would be equal to the expectation of that function of the random variable, which you can’t really know exactly unless your approach has resulted in a boring random variable (in which case somebody else has probably already done everything interesting there is to do with it). But you can take a guess at its expectation. Let’s take another simple example. Imagine you want to integrate something unpleasant that easily falls apart as f(x) times the standard normal pdf. All you’d need to do is get some observations of a standard normal variable (you can get close if you just ask everybody around you their height and standardize the results), apply your function f to each data point, then find the mean, which is not a terribly bad estimator of the expectation (and is typically the best “unbiased” estimator of it).
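To make the recipe above concrete, here’s a minimal sketch in Python (a toy example of my own, not anything from the post): treat the integral of f(x) times the standard normal pdf as the expectation of f(X) for X ~ N(0, 1), and estimate it with a sample mean. Taking f(x) = x² gives a known answer, E[X²] = 1, to check against.

```python
import random

# Toy Monte Carlo integration: to evaluate the integral of f(x) * pdf(x) dx,
# where pdf is the standard normal density, read it as E[f(X)] for X ~ N(0, 1)
# and average f over simulated draws. With f(x) = x**2 the true answer is
# E[X**2] = Var(X) = 1, so we can check the estimate.

def f(x):
    return x ** 2

random.seed(0)
n = 100_000
samples = (random.gauss(0.0, 1.0) for _ in range(n))

# The sample mean of f over draws from the pdf approximates the integral.
estimate = sum(f(x) for x in samples) / n
print(estimate)  # should land near 1.0
```

The estimate’s error shrinks like 1/√n, which is exactly the “throw more observations at it” trade-off described above.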

If you’re still reading and care what an unbiased estimator is, it’s simply an estimator whose expectation is the quantity you’re estimating. Crazy, huh? Sometimes, though, the best estimator is a biased one. Imagine you have some fair dice. They’re normal dice, except I’ve written “one billion” on the side where the one should go. Imagine you only get two rolls, and you have to figure out the average outcome of the dice rolls. You could do your two rolls, take the average, and call it a day. But what if in your two rolls you just so happened not to turn up “one billion”? Your estimate is going to be way off! What if, on the other hand, you just decided, without even rolling, that your estimate would be “one billion”? It doesn’t seem very empirical; indeed, it seems like you were biased before you even started, but, except on “The Price Is Right,” the second version of you is probably going to be closer, at least in order-of-magnitude terms, most of the time.

Qualms about true randomness aside, it’s not that hard to generate observations of a random variable. You can do it in Excel. Sure, the actual algorithm that produces them may not be perfectly random, but it’s pretty damn close. In order to get your initially difficult integral into something manageable, you may have had to make your function something ridiculous so as to leave behind a lame-ass pdf. But since you’re already fudging things anyway, why not just fudge the generation of the random variable? Maybe the pdf corresponds to a random variable you can’t do a reasonable job of simulating, so instead you simulate a random variable you do particularly enjoy, then adjust for how plausible each generated value would have been under the original pdf. If your random variable of choice is not a good approximation of the real one, most of your observations are not going to be worth much. But… if you can get so, so, so, so many observations that it doesn’t matter, you don’t have to spend much time being clever about choosing a good random variable.
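This reweighting trick is what statisticians call importance sampling. Here’s a hedged little sketch (the toy distributions are my own choices, not from the post): we want the mean under a “target” pdf p, but we only simulate from a friendlier “proposal” pdf q, weighting each draw by p(x)/q(x).

```python
import math
import random

# Importance sampling sketch: we want the mean of a "target" random variable
# with pdf p, but we only simulate from a friendlier "proposal" pdf q.
# Each draw x from q gets the weight p(x) / q(x), which corrects for how
# plausible x would have been under p. As a toy stand-in, p is N(3, 1) and
# q is N(0, 2); both are easy to sample, but they show the machinery.

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    x = random.gauss(0.0, 2.0)                                  # draw from q
    weight = normal_pdf(x, 3.0, 1.0) / normal_pdf(x, 0.0, 2.0)  # p(x) / q(x)
    total += weight * x                                         # f(x) = x here

estimate = total / n
print(estimate)  # should land near 3.0, the mean under p
```

If q were a poor match for p, most weights would be near zero and a few would be huge, and you’d need the “so, so, so, so many observations” escape hatch to get a stable answer.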

Still with me? The moral of the above story is that the problems with Monte Carlo techniques that historically have been solved with cleverness can now be solved with brute force. If my computer is strong enough, I can bend math to my will. I can simulate anything. Let’s have a quick thought experiment. Give me the following:

  1. An immortal human. They can be mutable, but they need to be able to perform the task described below every 5 minutes, forever and ever.
  2. Infinite computing power
  3. A pen and paper

Every 5 minutes, the person pauses for a second, thinks up a 15-digit random number, and writes it down. This all gets fed into a computer. Let’s take a super interconnected view of human cognition and assert that every fiber of your being affects everything you do. Ergo, the numbers you choose are a reflection of exactly who you are at that point in time. However, there is likely another person, who is not the same as you, who, at some given point in time, might generate that same number. So it’s not exactly one-to-one, is it? But if you keep giving little flashes into who you are in the form of these numbers, you are going to create an infinitely complex pattern that truly is one-to-one. There is only one YOU who would generate this extremely long string of 15-digit numbers. Let’s be super unambitious and super unclever and try to come up with the algorithm that you’re fundamentally using to generate numbers. Let’s try to write your brain in Visual Basic, taking the chimpanzees-on-a-typewriter approach.

Let a computer randomly write code, run it, and see what it gets. If it matches what you have produced so far, it’s pretty close. Once it starts predicting what you’re going to do in the future, it’s even closer. Sure, it’s going to take a lot of tries to get it right, but don’t forget that you gave me infinite computing power in number 2. Let’s get more meta and let it also develop algorithms to evaluate whether it’s getting closer to or farther from being right, rather than just stumbling around in the darkness. Let’s step back even further and let it write algorithms that evaluate those algorithms, AD NAUSEAM!

All of a sudden, it’s theoretically possible that it’s going to model not just your number generating algorithm, but you, and how is a perfect model of you any different than the real you? Let’s blow our minds even further and say that it can model the whole damn universe that led to your being created and agreeing to the stupid rules of this stupid thought experiment.


Now, while I may be able to find number 3, I’m not likely to come across 1 or 2 anytime soon, but it’s kind of creepy when pushed to the limits. All of a sudden, data is so much more important than any sort of clever insight we may bring to it. I was initially terrified by the idea of this self-organizing computer, getting smarter at making itself smarter and running simulations of anything conceivable. I smirked to myself at presentations I attended while people tried to explain data using kitschy, home-brewed theories. Even perfectly reasonable ideas started to seem shaky. Why should people die when deprived of oxygen? That’s a handy notion, but there’s a far more complicated structure at play underneath, something unfathomably complex and beyond our articulation. This reared its ugly head even more so in psychology, where we do factor analysis and then come up with cute names for scales based on how the items feel like they hang together. Sure, before we trust somebody to do this they have to spend years wading in the literature and learning what their predecessors have thought, but isn’t it all just alchemy as people simplify beautifully complex structure into feel-good aphorisms that can be explained in a few sentences?

I was really bummed about the capacity of the human brain. Our little notions of the world were handy for keeping us alive, but ultimately didn’t even begin to scratch the surface of reality. But then, while walking around glumly, trying to wrestle with this problem semantically and deductively (which is kind of ironic, I suppose), I came to some peace.

Our articulated knowledge is an attempt to express this more complex structure in some simplified rule, but our behavior doesn’t always follow our declarative knowledge. When you ask me to explain why something works the way it does, I’m going to give you the best estimator I can lazily produce, but it may be a pretty biased one. But ask me to bet money on what number is going to come up on the die, and suddenly I’m playing a more complex game.

This is something that goldfish can do but humans struggle with. If we’re flipping a coin, and I tell you I’ll give you a dollar every time you’re right and take a dollar every time you’re wrong, even if I tell you that it is an unfair coin manufactured to come up heads 55% of the time and tails 45% of the time, you’re probably not going to adopt the best strategy, which would be to just trust me and call heads every time. You may be able to recite the mathematical argument for behaving that way, but in your actions you refuse to believe that the system is that simple. You’re gathering additional data (the position of my hand, the wind, the speed and rotational inertia of the coin) and trying to build a much more complicated model of how the coin really behaves. Because nothing is really that simple. Coins don’t follow the rules that we set for them. We don’t do the “right” thing in every situation, because in reality, that is unknowable! Sure, our strategy is sub-optimal if the rules really are that simple, but they aren’t. Our brains are doing all the self-organization and meta-organization that I feared computers could do. We’ve developed all of these meta-meta-meta-algorithms that govern how we develop new algorithms, with clever little heuristics and razors, nifty principles like parsimony, not because they’re true, but because they (probably) guide us toward building a better internal model of the universe.
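For what it’s worth, the payoff gap is easy to simulate (a quick sketch of my own, assuming the 55/45 coin and dollar stakes described above): always calling heads should net about ten cents a flip, while “probability matching”, calling heads 55% of the time, should net only about one cent.

```python
import random

# A coin that comes up heads 55% of the time; +$1 per correct call, -$1 per miss.
# Strategy A trusts the stated bias and calls heads every time.
# Strategy B "probability matches": it calls heads on 55% of flips,
# which is roughly what people (but not goldfish) tend to do.

random.seed(42)
n = 100_000
always_heads = 0
matching = 0
for _ in range(n):
    flip_is_heads = random.random() < 0.55
    always_heads += 1 if flip_is_heads else -1
    guess_heads = random.random() < 0.55
    matching += 1 if guess_heads == flip_is_heads else -1

print(always_heads / n)  # about +0.10 per flip
print(matching / n)      # about +0.01 per flip
```

The matcher is right only 0.55² + 0.45² = 50.5% of the time, which is why its edge nearly vanishes.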

Because at the end of the day, that’s all we’re doing our whole lives. We’re taking in sensory data and trying to make sense of it, trying to create some sort of internal representation of the universe. I’m sure some law of thermodynamics or the uncertainty principle or some other vaguely invoked rule of physics would argue that something inside of a system can’t possibly make total sense of that system, but we can sure try. So I’ll keep telling my far-fetched stories about why things are the way they are but with the added wisdom that while I’m probably not right, that doesn’t mean the lies are without value.

Editor’s Note: This is really long and ungainly and I’m super impressed if you made it this far. After writing it I’m not even willing to reread it, at least not immediately, so rather than sit on this post like I usually do, I’m just going to truck it out, in all its ugliness, and pick at it and clean it up and spring board off of it in the future.

February 16, 2010 Posted by | Academic, Personal | , , , , , | 3 Comments

Twitter and Blogging

Twitter fits the way I think better than blogging. It lets me express my usually hyperventilating mind, each gasping thought shallow, rapid, and impermanent. It’s a real challenge to write a cohesive piece, but I think blogging may train me up to think in a different way than I naturally do.

The problem I have is that I’ll have an idea, start a blog entry about it, then either get distracted or have to go do something, but I carry on the conversation with myself that I started in the blog. I reach all kinds of resolution and gain insight, and by the time I go to write it up, I’m stuck with just the result rather than the journey, which is really the more interesting piece. I wish there was a way I could capture my subvocalizations so that I could actually log my thought process. I’m doing my best, but sometimes blogging feels like rewatching a movie with a tricky ending. I already know where it’s going, and it’s hard to take myself along for the ride again without tipping my hand as to the result.

Below is an excerpt of my “Buzz” conversation regarding my struggles to find a place for my thoughts to go. Does anybody else have a hard time figuring out what to say and where?

Feb 12 Daniel Kessler: Dammit new social media plan: Tumblr is just a big twitter, wordpress is a blog
Feb 12 Albert Yao: keep it simple, do everything within google!
Feb 12 Chris Love: Hey Daniel, do you really think Tumblr is just a big Twitter? I think the difference for Tumblr is that it’s a community formed on the basis of interests more than previous friendships and acquaintances (though as you know I’m now friends with Mills, Peter Santiago and a few other tumblers). What do you think?
Feb 12 Chris Love: But I don’t see any reason to continue with Facebook and Twitter though
Feb 12 Daniel Kessler: The real struggle I’m having right now is what to do with ideas. Chris, if you remember my writing style, my thinking style is quite similar. My thoughts are usually staccato bursts that don’t readily self-cogitate and unfurl. Twitter is excellent for this as it fits the way I already think.

In my continuing struggle to train myself to be a more meditative, fluid thinker, I’m trying to practice better blogging. So often I end up with 20 half-written drafts that were very interesting when started, but that I lose interest in over time.

For me it’s less about the community than it is about finding a repository for my thoughts. I think that you and your Tumblr-circle are all better trained thinkers than me, so you do an excellent job of having semi free-form, semi structured conversations. I’m still just trying to find a receptacle for my thoughts, and I want one storage device that fits what my brain spits out, and another that forces me to stretch and grow.

For me, WordPress or other blogging platforms are great for longer meditations and sharing, but don’t necessarily invite commentary since they can be intimidating and lengthy. Tumblr seems like a great place for reposting material that others have drafted, expanding on it, then kicking it out to the Tumblsphere for further criticism. I just need to figure out where it fits in my continuum of creative outlets.
Feb 12 Chris Love: Daniel, I think your entire post here belies your estimation of your own powers of thinking and expression. I too am trying to find a way to figure out what forms of web communication fit my time, moods, modes and methods best, but aiming at constantly morphing and moving targets seems to make this more a process of error than trial.

I think you’re right about Tumblr: it’s great as a sketchbook for one’s burgeoning thoughts and ideas, but it’s also a fantastic source for cogent bursts of information about political and cultural events. For example, Sea of Green’s live-blogging of the demonstrations in Iran is far more informative and useful than anything coming out of the mainstream press right now.

What do you think makes Twitter more useful (user-base aside) than Buzz? Man I’m confused these days.
Feb 12 Daniel Kessler: I appreciate your reassurances and take comfort that I’m not the only one struggling with that.

I’ve enjoyed Sea of Green’s stuff and will probably subscribe to it in Google Reader (thanks for so often reblogging, it’s kept me more up to date on Iran than I have been in a while).

The reason I’m still in Twitter is fairly simple and unexciting. I understand Twitter’s API well enough to piggy back on publicly available scripts, and there are enough Twitter “bots” listening to me that I can do all sorts of neat commands from a launcher app I run on my computers. If I have something I need to do, I can tweet a direct message to the “ToodleDo” bot which will make sure it gets added to my to do list.

For now, Buzz is a place where I’m happy to concatenate my stuff, and comment on it, but I’m not yet that interested in directly putting content here.

To be honest, Buzz is pushing me more towards wordpress because of its integration. If I could get content from Tumblr into buzz easily, I’d totally use that more. I just want a place that concatenates all of my activity so that it’s easy to see what I’ve been up to and thinking.

February 16, 2010 Posted by | Personal | , , , , , | Leave a comment

The Flickering Embers of your Soul

“It’s all we can do to look in the mirror and hope that the light doesn’t go out of our eyes.”
—Me, to nobody in particular

In an effort to actually publish more of my thoughts without just mind-dumping into Twitter, I’m going to start leaving fragments here that I can springboard off later.

Lately I’ve been thinking about the eternality of conscious being and have come up kind of pessimistic. I’m more convinced than ever that our conscious existence is just an emergent phenomenon, a beautiful property of our humming minds, but one that cannot exist on its own without a substrate.

Recent experiences of observing (in myself and others) the fragile emergent property that is consciousness crackle and weaken have really inverted my old top-down beliefs in will and self. The conscious mind is cool, but it is so easily disrupted, and it is miserably limited in its ability to command any resources for self-preservation under all but ideal circumstances. Bummer, huh? Well, let’s enjoy it while we can.

Nerdier posts coming soon, and maybe a splash of optimism in all the fatalism.

PS: I’m not depressed or bummed out or anything, I’m just slowly shifting my thinking about how I think of myself (or anyone else) as an agent. This new strain of thoughts has so far made me a lot more sympathetic to others and has helped me to feel more connected to other people: we’re all pretty frail.

February 14, 2010 Posted by | Personal | , , , , | Leave a comment

Being the Only White Person Around

I remember really liking this post when I read it a long time ago, but I wasn’t quite sure why. Something about the image felt really familiar. I was flipping through photos from China and found what I was looking for.

Click the first picture to jump to the full “Stuff White People Like” entry.

"Being the Only White Person Around"

From Stuff White People Like: "Being the Only White Person Around"


Daniel eating duck in Shanghai

Daniel eating duck in Shanghai

February 3, 2010 Posted by | Personal | , , , | Leave a comment

Statistics Woo!

I hope in my next life I get to come back as a statistician. I guess there’s nothing stopping me from doing that now, and maybe it’s just the Monster talking, but right now there seems to be nothing more interesting to me than regression, link functions, and ordinal regression. There’s just this awesome beauty in using math backwards that requires creativity and insight. This really started to click for me when I learned about Monte Carlo integration. How clever to realize that, as computing power grew, instead of using integration to figure out how random variables would behave, we could instead use their simulated behavior to do integration! How many other mathematical things can we turn on their heads? When will we use chemistry to understand addition, or physics to understand differentiation?
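A classic toy instance of that inversion (my example, not one from the class): the area of the quarter unit circle is π/4, so you can “integrate” by throwing uniform random points at the unit square and counting how many land inside.

```python
import random

# Integration by simulation: the quarter unit circle has area pi / 4, so pi is
# 4 times the chance that a uniform random point in the unit square lands
# inside it. Estimate that chance by throwing points and counting hits.

random.seed(1)
n = 1_000_000
inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4 * inside / n
print(pi_estimate)  # should land near 3.14159
```

No antiderivatives anywhere, just simulated behavior standing in for the integral.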

December 10, 2009 Posted by | Academic, Personal | , , , | Leave a comment