You don’t know me — well, I don’t think you do, anyway — but TidBITS publisher Adam Engst asked me to tell you a little about myself. It’s not that my life is so particularly interesting, but that as a fourth-year computer science major at the University of Virginia, I have a particular perspective on the technology world that Adam thought might be of interest to those whose college days are long past.
Like many others, I entered college from a high school where students were rewarded with good grades for memorizing and regurgitating the correct answers. I was great at that and always received above-average grades. I learned what was required from textbooks and was tested on it. There was no guessing as to what resources or subject matter to study. Alas, while my grades may have helped me get into college, high school otherwise didn’t do much to prepare me for the University of Virginia.
That’s particularly true of computer science at the college level, which is very much the opposite of regurgitating memorized facts. Professors never bother to make sure the class knows a particular technology before handing out an assignment. This came as a shock to me initially, but after years of working through these assignments, I’ve come to understand that teaching myself is perhaps the most important thing college has taught me. It was, and still is, an entirely different experience from high school. I haven’t always liked needing to do a substantial amount of independent learning before even beginning some homework, but I now understand that it is a valuable skill. Being able to learn on one’s own is necessary to succeed
in the real world because I’ll never know everything I need to and will constantly have to learn on the job.
I actually started as a business major but quickly switched to computer science because I wanted to make the tools people use, not just be a consumer. At the time, I believed there were only two types of people in the world: those who make the tools and those who use them. I would later learn that using any modern programming language requires relying on code, foundations, and tools that many others have created.
Programming from scratch just isn’t necessary anymore, and in most situations, it’s not even possible. From libraries to frameworks to development tools, programmers regularly share their work so others can use it and improve upon it for the betterment of everyone. In my naïveté, I also failed to realize just how many layers there are in a modern computer system. When I learned how compilers and lower-level languages worked, I realized that any code I write in a high-level language is translated into various other forms before the computer can finally understand it. Although I question the practicality of learning assembly languages, doing so gave me valuable perspective on just how amazing and complex the inner workings of our computers really are.
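You can see this layering for yourself in Python, which compiles source code to bytecode before an interpreter runs it. Here is a minimal sketch using the standard-library `dis` module (the function is just an arbitrary example):

```python
import dis

def add(a, b):
    return a + b

# Disassemble the function to see the lower-level bytecode instructions
# the interpreter actually executes; even one line of high-level code
# turns into several simpler operations (load, add, return).
dis.dis(add)
```

The exact opcodes printed vary by Python version, but the point holds in any of them: the code you write is not the code the machine runs.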
Needing to build on the work of others and the necessity of independent learning have raised numerous tricky issues for me and my fellow CS students. The most significant one revolves around the fact that the University of Virginia has one of the oldest and strictest honor codes in the nation. There is a single-sanction policy, meaning that anyone caught lying, cheating, or stealing is automatically and unequivocally expelled from the university with no second chance or opportunity to return.
So what do you do when your instructor tells you to consult “Professor Google” for how to program in a particular language? And what’s the policy on using code from Web sites? How about example code from textbooks? Asking for help on StackOverflow?
With so much on the line, we CS students live in constant fear of breaking the honor code when trying to build upon externally available examples. But at the same time, using what others have done, whether it is a tutorial, framework, library, or other piece of open source software, is both standard practice in the industry and fundamental to the advancement of the field. I wish there were a single, solid answer for how freely available code can be used in CS coursework, but in my experience it varies greatly from professor to professor.
One of my biggest reasons for majoring in computer science is that I wanted to understand how computers work. I love TV shows like “How It’s Made,” “Unwrapped,” and other programs that show how everyday items, no matter how mundane, are manufactured. I’m also fascinated by cooking shows because of how the TV chefs can so quickly and effortlessly combine different ingredients to make something tasty (and yes, I realize that much of it is the magic of television). When I was young I would always take things apart, and ever since I can remember I’ve wanted to know how computers work. Even now, after years of computer science classes, sending an email or viewing a Web page still amazes me when I think about all the technology that has to come together for these common tasks to happen in a fraction of a second.
Ironically, one of the key things I’ve realized over my years in college has been that I like the Internet more than I like computers themselves. I used to group the two together, since computers were the only way to access the Internet. But with the rise of smartphones, tablets, and innumerable other Internet-enabled devices, I’ve come to realize that what I always loved about computers was the connectedness of the Internet. When I’m offline, the computer feels radically less useful, since I can’t communicate with others and access the wealth of information freely available on the Internet. If the University of Virginia had offered courses or even a degree program in “Internet Science,” I would have studied that.
I also chose to major in CS because I wanted tangible skills. When you have Google and Wikipedia in your pocket, simply knowing something doesn’t mean what it used to. While creating my résumé, it was rewarding to be able to list the variety of programming languages and technologies with which I was familiar. Students in majors that emphasize soft skills don’t have the same luxury and probably rely more heavily on making a good personal impression, if they can even get in the door.
Like thousands of other college students last fall, I attended one of my university’s career fairs, filled with employers answering questions from prospective hires and collecting résumés. I focused on the engineering side of the career fair, but I was surprised — though not unpleasantly — to discover that “engineering” seemed to mean “software development.”
While talking to representatives from different companies, I quickly noticed that the same locations kept coming up time after time. Job prospects for engineering students who are willing to relocate to places such as Silicon Valley, New York City, Seattle, and Washington, D.C., are quite good. Northern Virginia, where I grew up, is the Silicon Valley for government contractors and is also a hotbed for IT jobs. I got my first “real” job as an intern at one of the largest government contractors in the nation, and I’m grateful for having grown up in an area that seems so far to be largely unaffected by the national economic downturn.
Between my major and my internships, I was able to schedule interviews at most of the companies I wanted. I learned the hard way that getting interviews and doing well at them are two completely different things. During my interviews, I have been asked programming questions, software development questions, logic puzzles, riddles, geometric reasoning questions, and database questions. Once I was even required to take an IQ test! I found myself wishing that I could just pass some software development test so I wouldn’t have to jump through the interview hoops repeatedly. I don’t know if any interview question can be a good indicator of how someone will perform at a particular job — does solving a brainteaser really show that I could work through a knotty programming problem? But I do believe that excellence is a habit and that hard work can more than make up for not being a genius. In my opinion, something like a GPA, which is a cumulative score for years of hard work, plus references from professors who are familiar with what I’ve done, would be a better indicator of what I can do than the answer to any one question.
Luckily, I was able to get through the interview process, and I’ve accepted a job offer to work as a data analyst at a digital market research firm after I graduate in May 2013. Why not as a developer? I did apply for many software development positions, but ultimately none of those for which I interviewed appealed to me. Part of that stems from the fact that two of my internships were at software development firms, and I decided at the end of my last one that I didn’t want to be a full-time developer if I could avoid it. I’ve realized that although I do love computers, I don’t love programming. The college courses I enjoyed the most were the ones in which I learned how things work, not those where I had to build things.
However, being a data analyst is genuinely interesting to me, and I’m not just saying so because that’s the job offer I accepted. I see being an analyst as being like a critic, but with data supporting your opinions. Essentially, an analyst is hired to look at data, think about it, and draw conclusions. I’ve been doing this for years on my own for my stock-picking hobby, using publicly available information to research potential investments. Also, recent events have led me to believe that data analysts are only going to become more important in the future. The poster child for the field is Nate Silver, the political blogger at the New York Times, who used data analysis to predict the results of the 2012 elections with a high degree of accuracy.
I don’t anticipate doing much programming, other than writing some bash scripts or SQL queries, but even so, majoring in computer science enabled me to land my ideal job. The company size, location, and other mundane aspects weighed into my decision, too, and I’m both confident that it’s the right position for me and looking forward to this next challenge.
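For readers who haven’t seen it, the kind of light SQL work I have in mind looks something like this. The table and data here are entirely made up for illustration, using Python’s built-in sqlite3 module so the example is self-contained:

```python
import sqlite3

# A hypothetical analyst task: load some survey responses and
# summarize them by group with a single SQL query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (region TEXT, score INTEGER)")
conn.executemany(
    "INSERT INTO responses VALUES (?, ?)",
    [("East", 7), ("East", 9), ("West", 6), ("West", 8)],
)

# Average score per region, highest first
for region, avg in conn.execute(
    "SELECT region, AVG(score) FROM responses "
    "GROUP BY region ORDER BY AVG(score) DESC"
):
    print(region, avg)
```

Nothing here requires a CS degree, but the habit of breaking a question down into data, a query, and a conclusion is exactly what those courses drilled in.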