Imagine watching weird mega-serial episodes for 6 hrs – back to back – no ad-breaks! That’s how my fever delusions strike me. But these nightmares give interesting insights into my psyche..
- No story, no action in this one.. Just a video played in loop – forever. Chubby cheeks, dimple chin. . stop stop – rewind. Now read that with actions in that sing-song musical way. Picturization – on Lavanya!!! As the video keeps playing on & on, I writhe in sheer agony! 😦
Okay.. that one was a situational one – from the overload of babies & mommies singing rhymes over the past weekend. The ones that follow are true nightmares. Bone chilling!
- Ice-cream shop. Softy machine. Poor machine. With each scoop of vanilla softy ice-cream it doles out, it winces in pain & coughs.. Much like I wince from throat pain with each gulp of water I swallow. I empathize with the softy machine. I can feel & understand its pain. At the same time, I can’t go without ice-cream – that is survival food! So, I wait in line for my cone of vanilla softy dipped in hot chocolate sauce, crying for the softy machine as I wait. 😥 😥
True nightmare that!
Last vivid one – again a true nightmare..
- It’s a happy time for KP & QCA research with him. Finally.. finally.. I have good company. We have 4 new additions to the QCA group – all crackpot characters, just as mature, secure, content, (over-)confident, blindly passionate, wit & sarcasm loving, eccentric as me B-) . We have hours of fun – pulling each other’s legs, doing crazy stuff, building huge HUGE models of molecular QCA & clock distribution networks [don’t ask how HUGE stuff can mimic quantum tunnelling – such logic is quickly waved off in dreams], fighting about what molecules to use for p-QCA etc etc. In summary – bliss. Then come the demons from ARM. They hear out our research, make sad faces & say “All that is good.. but this will not come into being in your lifetimes. . not in at least 50 yrs. This is inside information – 100% accurate”. [play sad violin, cello & piano tunes here]. All of us are suspended in disbelief – not even able to feel shattered in grief. Sob sob. Again, don’t ask why ARM of all people in the community would tell us that & how they’d know about QCA – suspend logic in dreams.
Stupid dreams. . but one striking thing about these most disturbing nightmares – my deep sub-conscious mind holds ice-cream & my research at the same priority levels!!
I am officially weird 😐
P.S. You ask why I am writing silly blog posts when I am down with 104deg fever?? Silly you. Use common sense. I can’t sleep, or read, or work – that’s why! 😛
My PhD in book titles…
From top to bottom :
- Freakish economics
- At times : eating, being droopy faced from over-eating, but loving it
- Born Free – to be as eccentric as you wish
- Becoming a Delhi walla. . esp. wrt food
- No explanation needed – the book title says it all
- Your life seems like a great golden sacrifice.. with as many side-tracks & masala stories as the Mahabharata
- At the end of it all, the whole thing feels worthy of being labeled ‘history’ & penning it down!
Warning : This is one hell of an arrogant & obnoxious post. If you feel like feeling offended, stop reading right now! (Did you notice that I did not even say ‘Please’ or ‘Kindly’?)
Myth#1 : A PhD is extended school “studies” – extended beyond tolerable limits!
Truth : I am NOT cramming books, writing suicide provoking answers in exams or submitting half-baked assignments every week here! And please, post-doc is NOT undergraduate courses extended to the limit! Anyone asking “For just how many more years do you intend to keep studying??” in spite of me politely explaining a dozen times that “I am not studying in the school sense, rather researching”, is asking to be pushed under the wheels of an auto-rickshaw at the very next opportune moment!
Myth#2 : Research is boring!
Truth : If it were, none of the researchers from Galileo to ‘Venki’ Ramakrishnan would have done what they did. Can you imagine life without electricity? Medicine? Aircraft? Cars? Satellite TV? Computers? That beloved iPad? Or that inseparable mobile phone? If not, you better dare not call research boring!
Myth#3 : CS researchers sit at a dark corner of an attic/basement/garage 24×7 typing away to a computer that has a blinking screen with black background & green font.
Truth : We do sit typing away to a computer for long periods of time. But so do you. After all, YOU are the one at the other end of the chat session on google chat! 😛
Myth#4 : CS researchers write scary C++/Java code every waking minute. If you print the amount of code all CS researchers in this world churn out, you’d need to borrow cupboard space from Jupiter.
Truth : I don’t remember the last time I wrote 3000 lines of code. Actually, I do – it was an assignment that I did wrong. CS researchers – most of them, most often – think more, code less! And they do NOT remember every bit of syntax of every programming language and can NOT debug your code in less than 5 minutes!
Myth#5 : Feel the urge to hack into that frenemy’s GMail/Orkut account? Call that CS researcher (yes, the one swimming in caffeine in that dark corner of that attic)
Truth : Too late. You should have asked a teenage me – that was when I lost interest in hacking. Or please ask one of those “techies” around who blog only about how to get a piece of software to do what it was not designed to do! No offense to “techies”. . but please spare me these hacking requests. Hacking no longer arouses me!
Myth#6 : Need an antivirus software? Or need to purge your laptop of the zillion viruses that you have managed to attract? Call that jobless CS PhD student.
Truth : That jobless CS PhD student might just be so outdated with anything that concerns Microsoft Windows that she now fumbles to navigate the Office 2007 interface! Shocking, but true. Sad, but true. Come to terms with it. Or delete my contact details from your phone. Please. Right away.
Myth#7 : Researchers read loads of books. All these books have complicated scary formulae printed 5 per page. Or it could be huge diagrams or graphs or rows and rows of numbers with 8 digits after the decimal point. Apart from hundreds of pages of text that sounds like Swahili, that is.
Truth : Neither Ashok Banker’s Ramayana nor “How To Rule The World: A Handbook For The Aspiring Dictator” has any of those. Sorry to disappoint you.
Myth#8 : That PhD student is busy at all times of the day because she is busy attending classes. After all, there would be more course work for higher ‘degrees’.
Truth : Actually, please continue believing that. Saves me a lot of pointless talk which generally involves countering the questions “When would you finish that PhD?”, “When would you come over to meet us?” and “When do you plan to get married?” with subtly sarcastic replies which you wouldn’t get most of the time!
When she talks of the self-doubt a creative person is almost always filled with, it strikes a chord.
When she talks of the ancient Roman way of thinking of “genius” as a magical divine entity who invisibly helps the artist and shapes the outcome of the work – a brilliant psychological construct to protect an artist from both the plaguing self-doubt that comes with not-so-good results and the dangerous narcissism that comes with very-good results – it strikes a chord.
When she talks of the poem coming thud thud thud to the poet and her missing it if she doesn’t reach for paper & pen in time to catch the poem & transcribe it, it strikes a chord.
And when she finally concludes that it is best for a creative person to dissociate himself/herself from the results and just concentrate on giving his/her 100% to the creative process, it strikes a chord.
Now, me saying that would be laughed/frowned upon, given that I am this “geek” or “nerd” (or both) who calls herself a “researcher”, a “computer scientist” and spends an insane number of hours in front of a computer in a depressing lab. “Where is creativity or art in that?” you ask?
Donald Knuth says “Computer programming is an art form, like the creation of poetry or music.”. By extension, CS research – with the necessity and opportunity to be able to “shift levels of abstraction, from low level to high level. To see something in the small and to see something in the large” (again in Knuth’s words) – is a higher art form. And when Knuth, one of the Gods of CS, says such things implying that CS is one of the greatest creative art forms, who the hell are you to refute, huh?
So that’s settled – I am involved in creative stuff, and hence all that Liz said in the talk applies to me. Matter settled.
But do ideas really “come” to me? Well, I’d like to think so. . How else can I explain the idea/solution occurring to me first and the “basis for that idea/solution” becoming clear to me only after a couple of weeks of analysis? As Leslie Lamport puts it, “When I look back on my work, most of it seems like dumb luck – I happened to be looking at the right problem, at the right time, having the right background.”
Am I filled with self-doubt? Well, only sometimes, luckily. At times when the “looking at the right problem, at the right time, having the right background” combination acts all elusive. At times when faced with the skeptical “You are working on that? Just how practical or real-life affecting a problem is that?”. At times when thrown that sarcastic “Oh, so you are working for the benefit of the generation to come two decades hence. Good”.
But that said, the beauty of the whole thing is in those “Eureka moments” – when you have given your 100% to groping (almost) blindly in the dark for the right problem or its solution and happen to see nice results.
So, as per Liz’s suggestion, I am thinking of having this notion of a “genius” fairy that “gives” me ideas.. A magical creature whose thought would keep me grounded when I do see good results from my efforts. . A divine creature who would take the blame when “my” ideas fail, at whom I can shout for not “passing along” good ideas.
Sounds fun. I just need a name for my “genius” fairy. Any suggestions? 😉
WARNING : Long post ahead! May not make much sense to laymen (yep, we super smug researchers like to call the rest of the world that).
A question that I am asked (too) often :
“When the hell are you going to complete that PhD of yours?”
That’s a question which has been eliciting different nonchalant replies – “God knows”, “Come on.. I’ve just started & talk about completion?!”, “Is that of any concern to the general good of the universe?”, “When my supervisor can tolerate me no more & kicks me out”.
But when a sweet Prof casually asked “So, when do you intend to start writing your thesis?” in the lift, I blinked!
I blinked, blinked again & finally blurted out “End of the year” & quickly added “that’s what supervisor wants”.
He smiled & asked, “When do you want to?”
I blinked, blurted out something, continued to blink & was left blinking while he left the lift on his floor.
Tadaa. . one of those “eye-opener” moments. . The only problem being that the opened eyes refused to close for a while, even to blink!
So, one question that I have been asking myself since :
“When am I ready to ‘complete’ my PhD?”
A check-list :
1. Have I learnt to really appreciate research?
Around 3yrs back, when I was asked in the PhD admission interview “Why do you want to do a PhD in CS?”, my answer was “Because I am passionate about Computer Science research”.
Today, my answer would be “Because I am passionate about research, and CS seems to be my pet research area”.
Thanks to an advisor who was & still is more interested in pushing me to think more about the exploit-the-physics part of my current research topic, I today find myself enamoured by research problems in any field. There is that zoom out effect – from focusing on just CS problems to finding problems everywhere and finding beauty in them.
So, yes, I have learnt to really appreciate research in a wholesome sense.
Point 1 – Check.
2. Have I learnt how to keep alive that passion for research & rise above disillusionment?
Enter PhD with this utopian expectation of “next few years dedicated to research, problem solving, excitement”
One year into it : You feel “Oh, it’s not all green grass here”.
Finish course work & plunge into “pure research” : You know “It’s NOT green grass here”
And it comes with the freebies – crazy deadlines, TA work, work that seems to pile up in burst mode, frustrating duties as a TA, seeing the dynamics of the professors’ lives – not really free of monotony, boredom or absurdities, suicide-prompting ‘duties’ as a TA (I know, that’s an intentional repetition), etc etc etc.
Worst of all : Days, and sometimes weeks pass without an iota of research happening. Either for the lack of time or for the lack of mood/peace of mind. Or both. That hits at the very foundation of the “Oh, research in academia is intellectual bliss” notion.
The next time you get that stretch of few hours sans disturbance when you can really find/attack a problem at peace, all the disillusionment gets undone.
The trick seems to be keeping this cycle’s period as short as possible. By hook or crook.
For instance, going AWOL & being nocturnal for a week, staying away from lab and allowing yourself to get obsessed with a problem – not necessarily urgent/useful towards your thesis seems to work. Wonderfully.
So, yes, I can negate disillusionment & re-spark that passion for research whenever necessary.
Point 2 – Check.
3. Have I learnt to come out of utopia when necessary & be practical in research?
Being a person who is prone to getting obsessed with one problem, however unimportant it may seem, I sometimes need to be shaken awake to be able to see the bigger picture.
Has that been done? Yes, quite often.
Would it need to be done by an external force in future? Perhaps, but not much.
I have learnt to “let go” of a problem when it doesn’t seem to lead anywhere for a while.
Point 3 – Check.
4. Have I learnt the importance of going back to fight with what I had “let go” of long back?
Whenever I find myself saying “Yes, that is an interesting idea, but I explored it 2 yrs back and it did not work out”, I almost always find myself getting more insights into the issue/solving the issue when I go back to the problem again. Don’t know if I get wiser with age or the problem gets easier with time! Guess the problem just gets marinated long enough at the back of my mind & becomes soft enough to bite through.
A lesson learnt long back, and reinforced recently.
I should start a “failed ideas” diary & note down all the ideas I “let go” of & flip through it once a year or so.
Point 4 – Check.
5. <More such “have I learnt”s about attitude towards research, which are too many to list>
Point 5 – Check.
6. Have I had enough time to think, introspect, and “take it slow”, so that I would not later feel that my life as a PhD student was ‘incomplete’?
Absolutely. Had the luxury of a supervisor making the mistake of saying, once, “Take time, learn. There’s no hurry to produce results for a while.” and exploited that statement to the fullest.
The “a while” got extended to “quite a while” to “a long time”, but in the process, I learnt to enjoy life inclusive of “work”. Learnt to explore & be adventurous – both in research and otherwise. Learnt not to be a workaholic. I know, that’s debatable according to people who know me, but please believe me, I am not a workaholic. 😉
Point 6 – Check.
7. Have I learnt to face the necessary but boring evils in research without sulking (much) – namely writing papers, preparing presentations (conforming to certain ‘rules’) etc?
I think so. Yet to be confirmed.
Point 7 – Half check.
8. Do I have the ‘results’ to flaunt & demand that tag of “Doctor of Philosophy”?
Not yet. But I have a feeling that’s the easier part, once the rest is covered.
Point 8 – Pending.
Wow. . More than 81% (tentative) coverage. Good going. [Pats her back]
Tentative – cos there might be (should be) more metrics to judge how “ready” one is to complete a PhD.
If I want to boost that coverage number (as a true blue engineer), I would have to add the following points to the check-list :
- Do I want to eat real food, without having to go through the trouble of going to restaurants? YES. Check.
- Do I badly, badly want a washing machine & instant geyser of my own? YES. Check.
- Do I want a scooty? YES. Check.
- Am I fed up with that part of TA work which needs me to answer questions like “If I change that file, and something goes wrong, then?? [scandalized look]” without replying “The world will come to an end. Please don’t trigger an apocalypse”? – YES. Check
That pulls up the coverage to 87.5%. Awesome.
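For fellow true blue engineers who want to audit the arithmetic, the coverage numbers above do check out. A quick sketch (the point names and the Check = 1 / Half check = 0.5 / Pending = 0 scoring are just my own bookkeeping, not any official PhD-readiness metric):

```python
# Coverage arithmetic for the "ready to complete a PhD" check-list.
# Scoring: Check = 1, Half check = 0.5, Pending = 0.
core_points = [1, 1, 1, 1, 1, 1, 0.5, 0]  # points 1-8 above
core_coverage = sum(core_points) / len(core_points) * 100
print(f"Core coverage: {core_coverage:.2f}%")  # 81.25% -> "more than 81%"

# Throw in the four bonus points (real food, washing machine & geyser,
# scooty, TA fatigue), all of which score a full Check.
bonus_points = [1, 1, 1, 1]
all_points = core_points + bonus_points
total_coverage = sum(all_points) / len(all_points) * 100
print(f"Boosted coverage: {total_coverage:.1f}%")  # 87.5%
```

6.5 out of 8 gives 81.25%, and padding the denominator with four guaranteed Checks pulls 10.5 out of 12 up to 87.5% – which, as any engineer knows, is exactly how coverage numbers get boosted in practice.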
Any additions? Especially to the non-bonus points in the check-list?
- Paper presentations – mostly boring
- Coffee – with stupid non-dairy creamer if the conference is abroad
- Looking for that precious seat which hides you from the poor speaker on the stage when you nod off
- Someone beating you to that seat & you ending up in the first row, struggling to keep eyes open
- Sometimes, very rarely, catching up with a dear friend, gossiping, giggling, having fun while folks around are busy talking power, performance and defects
- Keynote speeches – may or may not awaken you
- Boring lunches if you have no one to talk to
- A registration kit – a bag, some small souvenir, a notepad, a pen, and.. a CD of the proceedings of the conference that you never even pop into a CD drive
- Travel, exploration, experience of a new place – the best part of it
- In case of out-of-country travel, applying for travel grants & getting reimbursements – Arrrrgh, the most irritating part of it!
- Sleepy, tired, bored people around who have the same thought as you – “When would this end? When can I get some fresh air or some sleep or both?”
Well, it should have been, and I’d have preferred it to have been..
- Idea or Findings presentations – kicking off new ideas
- Keynote speeches – the inspiring ones of the kind that leave you with a sincere, silly grin and ideas to contemplate
- Meeting interesting people who are infectiously enthusiastic about stuff being discussed
- Lunch and Coffee breaks turning into impromptu brain-storming sessions
- That proceedings CD which you’d want to look up for details of interesting ideas/findings that you’ve just heard of
- Panel discussions, open debates, mass brainstorming sessions
- Travel, exploration, experience of a new place – an added bonus
- Inspiration, mental refreshment & enthusiasm to take back home
- The want to attend the conference the next year and every year that follows
Now, why the hell doesn’t that happen?
When a V.S.Ramachandran can make neuroscience so engaging and connect with a non-scientific audience, why can’t a CS researcher connect with a CS research audience? Forget about other speakers, why can’t even the so called keynote speakers make an impact? Why are you subjected to keynotes where the speaker doesn’t even recognize what’s there on the next slide and asks his colleague to help him with it?
When, as the researcher, I find the work in my paper exciting, when the Program Committee has found it valuable enough, why can’t I hold the audience’s attention for a few minutes and share the excitement?
Is it that CS is too bland, boring a subject to inspire? Is it too intimate a field of research to be able to spread the joy in the community?
I guess it’s just that we have been taking conferences too lightly .. We have been taking them as an alternative to journals – fast turn-around time, fewer pages to write, less rigour needed and voila, you can take a few days off on the pretext of attending!
I guess it’s just that there has come about this unwritten rule that “It does not really matter if your talk is as insipid as mud, you’ve got a paper published. Chill out!”.
I guess it’s just that no one really cares any longer if they enjoy the technical part of the conference. It’s about networking – be seen at the right places, be seen among the right people, be seen often, people should recall you as that guy from that institute whom we met in that conference, that conference and the other conference.
I guess it’s just that hosting a conference has become an issue of pride and has got nothing to do with how much purpose the whole drama of organizing it over months – involving a couple hundred people, a hall full of people from the world over travelling to your place, all the food, money & effort – serves!
And as long as speakers treat conference presentations as summarizing their papers – abstract, objective, literature survey, idea, result…. As long as the audience treats presentations as a replacement to reading the papers. . As long as people attend conferences just because their accepted paper would get published only if they do. . As long as CS folks stick to rigour, completeness and formal language even in presentations. . things are not going to change.
And why do I write this now?
Because I just realized that the only long lasting thing that I have gained from the last conference I attended (in China) is a taste for jasmine tea!!
Disclaimer: (Those who know how crazy I can be or are crazy enough, skip this part)
For diplomatic & precautionary reasons, the following obvious facts about this post :
- Purely imaginary stuff NOT to be taken seriously.
- This “paper” does not reflect how we do research. We are quite serious folks when it comes to research!
- This “paper” does not reflect how I write papers. I generally write up stuff I believe in and actually know quite a bit about what I am proposing/claiming/referring to.
- This opinion is entirely mine and not supported or even considered as an opinion by the institute I belong to. (Just in case some weirdo actually thinks institutes can have such crazy opinions! 😛 )
- This post is not to be immaturely interpreted to mean that I am outrageously jobless!
Okay, that’s enough of sensible(?) stuff. On to a sample of the “revolutionary” writing style I propose for the research community to adopt to make papers more interesting & less monotonous..
Qutb Architecture for QBA
Me & My Advisors*
Cool Institute, Hot City
* – it is for a blind review process. .Guess who we are. .Tee Hee 😀
Abstract We all know about this super-hot technology called QBA and the world is waiting to dive in and reap its benefits ASAP. In this paper, we propose an amazing architecture for QBA which would revolutionize the way we live life. It is so good.. We can’t describe it in the abstract. So, do read on!
I. Introduction
So, this QBA technology is one amazing thing that has come up in recent years. It is a promising concept and has the potential to do really good things to the VLSI industry. If you haven’t heard of it already (most probably you haven’t), go read up what its creators have to say in these papers. I can’t be writing up all the basic stuff here with the miserly conference guys giving me only 2 pages to say my stuff.
For those who read only the Abstract, Introduction and Conclusion sections and expect to appreciate the research in the paper, all we have to say is this. We were very much disappointed by all existing works which propose architectures for QBA and decided to throw in our bit to the literature out there. So, what we propose here is a novel, mint-fresh architecture which is efficient, compact, low power, low cost, like nothing before in history. We call it the Qutb Architecture because – a) architecture, Qutb – see the link? b) the Qutb happens to be the first author’s favorite structure around here c) Qutb & QBA alliterate 😉 . Now that you have your reason to go ahead and read the rest of the paper, you better read up the rest of the prose.
II. QBA Architectures in Literature
We do realize that a lot of people have tried to come up with architectures for QBA. But, unfortunately, sadly, disappointingly, we do not really like any of them. What X et al. propose is too simple. Y’s idea is nothing new.. it’s just old beer in a new, shiny can. The stuff P & Q are talking about appears good but has its own problems. As far as the architecture A, B & C propose, it’s too complicated. Frankly, I can not understand what they are doing in that basic block. Other ideas that have been talked of are too lame for our tastes.
III. Motivation
Well. . I was supposed to write up a survey paper thingy & was reading up all those papers mentioned in Section II. The bulb glowed at that point – do something new which will overcome all those problems, give the world an all-goody-goody (like maryadha purushottam Ram) architecture for QBA and give me an excuse not to write up the survey paper for the moment. Smart na? B-)
IV. Our Qutb Architecture
So, the thing we propose is this. To escape the problem of blah#1, we exploit the unique blah#2 property of the QBA fabric. Next, we do this & add that, keeping it like this so that problem blah#3 does not surface. With those handled, feature blah#4 is incorporated so as to make the architecture very efficient wrt. both area and timing. Figure 1 illustrates the design of this stuff. Note that the whole thing is so simple, it is actually cute.
So that is broadly how our thing works. I can not really explain it better here (only 2 pages remember? 😦 ). If you are really interested in understanding it (that’d be so wow!), feel free to drop an e-mail to me. . We can discuss the beauty of this concept over coffee or on skype 🙂 .
V. Analysis
As mentioned earlier, our architecture is amazing, the others’ are not. Simple. Ours performs better in whatever experiments we conducted (well, in most of them). And we sincerely, passionately, feverishly believe & hope that our architecture is better in many other ways that the experiments do not prove too. Please trust our intuition on this. Anyway it is so cute, there is really no reason you would go for those ugly architectures over ours.
Being as humble as we are, we would like to point out the small issues in our beloved Qutb architecture too. It is not as cute as we would like it to be. But don’t worry, we are working on it!
VI. Conclusion
(Reader is requested to read Abstract, 2nd paragraph of Introduction and 1st paragraph of Analysis sections again)
Further work is however required to statistically prove that this Qutb architecture for QBA is better in ways blah#5, blah#6, blah#7 and blah#8 than existing architectures. I do not have the patience to do all that at the moment. In any case, if I were to work on getting all those data, I would be ready to present my PhD thesis any moment. .which I am not. So, hold on till I do or try to do it yourself. Cya.
No space left for references. You do not need them anyway. . 1) You are not going to actually look up the references. 2) You have Google! 😐 3) I am not citing any of my or my advisors’ papers anyway.