macro trends | technology’s impact on marketing – consumer, brand and agency space

Recently, Ruchir Pande and I delivered this presentation at MICA.


are you designing your career?

This isn’t a post about your passion, following your dreams or what have you; there is better material available out there for that. These are a few lines about my observations in our industry – the IT industry as you know it. Most of us work for ever-evolving organisations, or come from a background where everything revolves around the ambition of getting to the next level, an important role and probably an onsite opportunity … so a lot rides on the choices we make along the way, during those years spent at an organisation.

My question is – do we really make conscious choices when making important career decisions?

You may compare jobs/roles when you are joining a new organisation (or maybe not), but what about the career choices you make when you have to grow within an organisation? Do you ask the right questions, like: where does a particular role take me? Does this choice of work lead to a predefined outcome?

Truth is, most of us (including me) have made decisions that revolve around one thing – what’s best for the organisation.

Forget conscious choices for a moment – are you (if you have read this far) consciously choosing where your career needs to go? The path, the direction, the outcome – I call it “Designing” your career.

Do you design your career or do you go with the flow?

Mentoring/coaching/growth conversations in organisations revolve around outcomes that look like rewards (like I mentioned above – promotion). Mostly these conversations aren’t focused on expertise or long-term goals, and most importantly they don’t link back to the career choices being made. As long as the role benefits the organisation, it is a good role and a good reward. I don’t intend to be a cultural change agent here – I don’t even intend to call out what’s wrong; in fact, I would argue that none of this is happening on purpose. It is happening because individuals (you and me) aren’t making conscious choices that link back to our career aspirations (not the rewards). It’s us who need to rethink our priorities, rethink what career means, rethink the outcomes that are linked to expertise, and have a “Design” of our own that leads to a reasonably certain outcome in terms of long-term career planning.

I believe we all stand somewhere in these three boxes – it’s a matter of identifying that and (maybe) working on moving towards the right side (if that makes sense). I don’t think there is anything wrong with being in any of the boxes, but it’s important to be aware of where you belong.

career-design

I have seen many folks struggle because they make choices assuming someone else (mostly their seniors) is making the right decisions for them, or make choices with no thinking put into them.

Do you have a design for your career?

Does the opportunity fit into that design?

You have to own your career – better still, you have to design your career. No one else but you has to take accountability for defining what you want.

TDD and effective unit testing | from an architect’s mind

Having practiced TDD, and also having tried the “unholy” path of not unit testing my code as part of the SDLC, I have experienced the varying emotions attached to TDD. A few think it’s the industry de facto standard and hence we should have it, some just follow the “standards”, some think the process warrants it, but very few understand the big picture.

I am trying to pen down the “often overlooked” aspects of decision making while advocating TDD in a project. I AM NOT PROPOSING A NEW PROCESS OR A NEW APPROACH to software development.

Let me clarify and set the context: I am not trying to be a unit testing critic here; I am trying to pen down some interesting observations and aspects which, I believe, will make our decision making more pragmatic while planning, estimating and implementing TDD.

Conceptual understanding or lack of it
TDD == write code to pass a set of test cases. The test cases here are unit test cases.

Conceptually, unit testing is simple; you test a piece of code in isolation. Assume you have a multi-layer logical architecture and you are developing unit test cases for your service layer: at a higher level, you would need to isolate your service layer from the rest of the layers; at a lower level, you would also need to isolate the method/logical unit of the service layer from the rest of the classes or dependencies.
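As a minimal sketch of that isolation (the `BillingService`/`OrderDao` names and logic are hypothetical, not from any real project): the service-layer unit is tested against a hand-rolled stub of its lower-layer dependency, so nothing below the service boundary actually runs.

```java
import java.util.List;

interface OrderDao {                       // lower-layer dependency: contract only
    List<Double> findAmounts(String customerId);
}

class BillingService {                     // the unit under test
    private final OrderDao dao;
    BillingService(OrderDao dao) { this.dao = dao; }

    double totalFor(String customerId) {
        // sums the amounts returned by the DAO
        return dao.findAmounts(customerId).stream().mapToDouble(Double::doubleValue).sum();
    }
}

public class BillingServiceTest {
    public static void main(String[] args) {
        // The stub replaces the real DAO, so the test exercises BillingService alone.
        OrderDao stub = customerId -> List.of(10.0, 20.5);
        BillingService service = new BillingService(stub);

        double total = service.totalFor("c-42");
        if (total != 30.5) throw new AssertionError("expected 30.5, got " + total);
        System.out.println("BillingService tested in isolation");
    }
}
```

The same shape works whether the stub is hand-rolled, as here, or generated by a mocking library.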

I think I need not explain the positives of TDD; there is enough information available on the internet. However, I want to paint the wholesome picture of where TDD fits in the bigger puzzle of software development.

What I found missing is a big-picture understanding of why TDD – which means: if I had a QA strategy in place and a quality goal in sight, TDD should play its part in it, as opposed to a “TDD will ensure quality” thought process. Let me clarify with a little more information:

If I had a testing strategy in place for quality checks, I would want to check how I cover the entire application based on the goal of that strategy. So, if I decide to follow only TDD or unit testing as the QA strategy, my questions should be:

What coverage do I get? Let’s say I get 30% (on the higher side); how do I cover the other areas of the application?

– I might want to look at integration testing as an option.
– I might also want to look at having data-driven test cases.
– I might also want to have navigation testing.
– I might also want to look at having an automated regression testing suite.

Having weighed my options above, I would want to look at the ROI of each option, or combination of options, in terms of:

Skill availability, e.g. an automation test suite would require a specific skill set.

The maintenance cost of each of the options, e.g. would a variation in the functional design require a change to the entire set of test cases (of any programmatic form)?

Infrastructure and processes in place, e.g. how do you tie the CI engine to the automated regression suite? Do you have a build verification suite in place that will mark a build green vs. red?
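As a concrete sketch of that last point, the CI tie-in can be as simple as a build target that fails the build when any unit test fails. Here is what that might look like with the ANT junit task (target, property and path names are illustrative, not from a real build file):

```xml
<!-- Illustrative ANT target: the build goes red if any unit test fails. -->
<target name="verify" depends="compile-tests">
  <junit haltonfailure="true" fork="true" printsummary="on">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="${reports.dir}">
      <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
    </batchtest>
  </junit>
</target>
```

Wiring the CI engine to invoke such a target on every commit is what turns unit tests from a convention into an enforced check.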

Effective unit testing
If I establish that I will definitely make use of unit testing in my development process (more often than not this is the case), I would, as a second step, want to ensure that my team understands and follows a standard method of figuring out the important test cases.

Identifying a test case:
Most of the time we only look at the programmatic aspect of writing a Java unit test case and overlook the “method” of arriving at a test case. QA is usually equipped to identify “good” test cases, but developers find it hard – or at least aren’t experts at doing so – and hence there is a chance that the inventory of (unit/programmatic) test cases may not represent the most effective set. So it is imperative to ramp developers up on the process/methodology of identifying “effective” test cases.

Revisiting the “identifying techniques”
Like I mentioned above, one of the important aspects of TDD is identifying meaningful and effective (subjective) test cases. Here is a list of some of the techniques which could/should be used to identify the various scenarios:

– Equivalence class partitioning
– Boundary value analysis
– Invalid inputs
– Special (uncommon) inputs

My recommendation would be to come up with a standard set of rules and guidelines to determine test cases.
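To illustrate the first two techniques, here is a small self-contained Java sketch (the eligibility rule and its 18–65 range are invented for the example): equivalence class partitioning picks one representative input per class, and boundary value analysis probes each edge of the valid range.

```java
public class AgeRuleTest {
    // Hypothetical unit under test: valid ages are 18..65 inclusive.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // Equivalence classes: below range, in range, above range - one probe each.
        check(!isEligible(5));    // invalid class: too low
        check(isEligible(40));    // valid class
        check(!isEligible(90));   // invalid class: too high

        // Boundary value analysis: test on and just outside each edge.
        check(!isEligible(17));
        check(isEligible(18));
        check(isEligible(65));
        check(!isEligible(66));
        System.out.println("all partition and boundary cases pass");
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("test case failed");
    }
}
```

Seven cases cover what exhaustive testing of every age would, which is the point of having a method for identifying test cases.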

Most often we do not differentiate between white-box (WBX) and black-box (BBX) unit test cases. My recommendation would be to focus on white-box test cases alongside black-box unit testing. In most scenarios where other testing methods are integrated into the overall testing strategy, BBX scenarios overlap with options like automated regression, navigation tests, functional tests or manual testing; focusing on WBX means you still get coverage for the critical pieces of code at the level you want (subjective).

When to write BBX (data or input/output driven)
In this approach, the tester views the program as a black box and is not concerned with its internal behavior and structure. You derive a BBX unit test case from the contract itself, which means that in the Java world you can write all your unit tests with just an interface to work with.

When to write WBX/structural (logic driven)
Using this strategy, the tester derives test data from an examination of the program’s logic and structure. You would write WBX tests once the implementation of a given “function” is complete; the driver for unit testing the code is its complexity. E.g. if you have many alternate flows in the code, it is a candidate for WBX unit tests.
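A minimal sketch of that WBX idea, using a made-up discount method: each test is derived from a branch in the code, so every alternate flow gets exercised.

```java
public class DiscountTest {
    // Hypothetical method with several alternate flows - a WBX candidate.
    static double discount(double orderTotal, boolean loyaltyMember) {
        if (orderTotal <= 0) throw new IllegalArgumentException("total must be positive");
        if (loyaltyMember && orderTotal >= 100) return 0.15;   // flow 1
        if (loyaltyMember) return 0.05;                        // flow 2
        if (orderTotal >= 100) return 0.10;                    // flow 3
        return 0.0;                                            // flow 4
    }

    public static void main(String[] args) {
        // One white-box test per branch, derived from the code's structure.
        expect(discount(150, true), 0.15);
        expect(discount(50, true), 0.05);
        expect(discount(150, false), 0.10);
        expect(discount(50, false), 0.0);
        try {
            discount(0, false);
            throw new AssertionError("expected rejection of non-positive total");
        } catch (IllegalArgumentException expected) { }
        System.out.println("all alternate flows covered");
    }

    static void expect(double actual, double wanted) {
        if (actual != wanted) throw new AssertionError(wanted + " != " + actual);
    }
}
```

A pure BBX suite written from the contract alone might never think to probe the loyalty/amount combinations; the branches in the implementation are what suggest them.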

How does it fit in the SDLC?

Assume a Java-based project implementation; the following is an ordered listing of the steps to be followed in the implementation phase of the Software Development Lifecycle (a little detailed, and only suggestive – e.g. every project may not warrant a BBX design document; good Javadoc could do the job).

………………………….

6) Contract Implementation – Based on the Technical Design Document, write the public classes and interfaces representing the public contract implementation.

7) Black-Box Unit Tests Design – Using the Functional Design Specification documents and the contract implementation, design the unit tests for the functionality of classes seen as black boxes; this step should result in a Black-Box Design Document.

8) Black-Box Unit Tests Implementation – Implement the unit tests according to the Black-Box Design Document.

9) Black-Box Unit Tests Code Review – Assures that testing guidelines and coding standards have been followed; results in an Inspection Report.

10) Code Implementation – Implement the actual code using a test-driven development approach – code is written to pass the black-box unit tests.

11) Black-Box Unit Tests Execution – This step intermingles with the previous one in an iterative effort to implement functionality while keeping in mind the precise goal of passing all the black-box unit tests.

12) Code Review – After the functionality has been implemented, the Design Inspector reviews the code implementation and proposes – where considered necessary – where the implementation should be thoroughly tested using white-box testing.

13) White-Box Unit Tests Design – Design unit test cases (where required) to test the implementation from a structural perspective. This should result in a White-Box Design Document containing a list of methods to be tested.

14) White-Box Unit Tests Implementation – Implement the white-box test cases.

15) White-Box Unit Tests Code Review – Assures that coding standards have been followed and testing goals have been achieved; results in an Inspection Report.

16) White-Box Unit Tests Execution – Run the tests to thoroughly verify the implementation.

..........................................

Infra … what’s in a role?
How are infra and roles relevant to TDD? Why are we even talking about them?

  1. CI/build engine
  2. Build manager

One of the common mistakes we make is to ignore the setup needed to execute builds and integrate checks and balances for TDD (and many more such practices). There is also a need to identify a build manager (a role that owns build-related practices) who would make sure builds are tied to TDD completely, not partially – from how the project is designed (probably a function acting as input to the build setup) and how dependencies are tracked in a modular setup, to how a build is produced, released, deployed and tested (build certification to regression testing). This topic probably deserves its own write-up for the sheer subjectivity it carries.

Involvement through ramp-up/training

One of the biggest roadblocks in implementing TDD is the low level of interest among developers in adopting TDD and understanding its benefits. Often, “unit testing” as a term is used to describe any testing done by developers, whether integration testing or functional testing, which, in my mind, dilutes the whole concept and confuses new developers’ understanding. There is a greater need to educate developers/testers and help them identify new and effective tools; this will ensure the required comfort level and expertise for them to participate in whatever strategy/process we arrive at.

I would recommend that each project block some time for planning a ramp-up/training, for the sake of fulfilling the objective of a quality deliverable through greater participation of the developers.

case for an alternative strategy

I believe that for some projects programmatic testing, and not necessarily unit testing, is an effective tool (probably as effective as unit testing) in the development cycle for finding and fixing defects early in the process. Assume a project which is more or less a customization of a tool/product and requires far less coding than an implementation from scratch on top of a custom “tech stack”. I would probably look at using BBX test cases at a very high level – at the integration/flow testing level – to balance quality and cost. Open source tools like Watij or Watir are good candidates for such a project to follow test-based (not necessarily test-driven) development.

I think what we need to understand and realize is what will save us “cost” and at the same time result in a “quality” output/deliverable. In my experience, we tend to include all possible tools and processes (from the so-called laundry list of standards) to fill in the gaps while coming up with a testing strategy. Like I said above, a testing strategy has to be comprehensive yet cost-effective and productive – which pretty much means there is no purely objective way of arriving at one.

OSGi, wish I had it before | from an architect’s mind

This is one of my old blogs that I had published on an internal site; re-sharing it here.

Very recently I worked on a project whose architecture (at a high level) had the following components (among many others):

1. Module builds and dependency management. We had tried to relate pieces of functionality and termed them meaningful modules, e.g. A is a module which owns one meaningful set of business functionality, B is another module which owns another meaningful set (a meaningful business entity), and so forth. Within each module we had tried to keep the UI and service layers separate. Also, things which were larger and couldn’t fit within module boundaries, or were non-functional in nature, were all assigned to CORE module(s). There were also UI components which were not specific to any module, say the menu (header et al.), and these were bundled into a CORE-UI module.

The idea was to have dependencies exposed through API bundles, e.g. A-api-1.0-snapshot.jar would contain all the exposed service contracts, and module B could depend on any class/interface from this bundle. However, A-impl-* couldn’t be a dependency for module B, as it was supposed to contain implementations (concrete internal implementations, including the exposed contract impls).

Dependency management as a philosophy was implemented as part of the overall “deployable unit” (inter-module) and also internally to each of these modules (intra-module). E.g. A could depend on B and C, but C and B couldn’t depend on A.

We wanted unidirectional dependencies in the system. Besides, within each module the UI could depend on the Service layer, and this dependency was also strictly unidirectional. E.g. an action bean could depend on a service, but a service MUST NOT depend on an action bean.
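One way to make such rules concrete in metadata is a dependency descriptor per module. A hypothetical Ivy sketch for module B (organisation, module and revision names are made up): B declares a dependency only on A’s API bundle, never on A-impl, which keeps the coupling contract-only and the direction enforced by what the build can resolve.

```xml
<!-- Illustrative ivy.xml for module B -->
<ivy-module version="2.0">
  <info organisation="com.example" module="B-impl"/>
  <dependencies>
    <dependency org="com.example" name="A-api" rev="latest.integration"/>
    <!-- deliberately no dependency on A-impl: contract-only, unidirectional coupling -->
  </dependencies>
</ivy-module>
```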

2. Versions and version management. Versioning for us was part of the bigger framework, and hence we wanted to have it in place for each of our modules. Every module was marked with a version independent of any other module’s versioning.

My original idea was to have three-digit versioning, say 1.0.0, where the first digit is your build number, the second digit is the incremental build, and the third digit marks either a snapshot build or a build that was part of a release. So, if I set up scheduled builds (CI), the build would be core-impl-1.0-snapshot.jar, and we could version this module build as 1.0.3 if we released it.

Digit 1 – build no.
Digit 2 – incremental build no.
Digit 3 – snapshot, release or patch build. I don’t think we need any more digits in there.

3. Builds and change control. The build artifact in our case was an enterprise archive file. The EAR is a composite bundle of module builds (jars) together with a web application definition, with all JSPs and (required) site resources bundled into a WAR inside the EAR. I wanted us to be able to version the EAR build differently from the module builds, which is how the system was designed. However, from a change control perspective our builds were still dependent on branching the code base, which is a tedious process. I wanted to be able to specify a particular version of a given module to be bundled in an EAR and avoid all the “manual” work of creating patches. I wanted us to control the code base using versions and avoid using branches, e.g. I wanted bundle A to be frozen for a given period while releasing the other modules in a build. I should be able to set the build scripts in the EAR to pick A-impl-&lt;myspecifiedversion&gt;.jar vs. picking the latest snapshot version of the jar. This would mean no manual work, and we could control the build dynamically for a release.
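As a sketch, with Ivy-style descriptors the pinning described above could look like this (names and revisions are illustrative): the frozen module resolves to a fixed released revision while the rest keep tracking the latest integration build, so no branch or manual patch is needed.

```xml
<!-- Illustrative EAR-level dependency section -->
<dependencies>
  <dependency org="com.example" name="A-impl" rev="1.0.3"/>              <!-- frozen for this release -->
  <dependency org="com.example" name="B-impl" rev="latest.integration"/> <!-- keeps moving -->
</dependencies>
```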

There are obvious advantages to what I wanted to achieve:

– extensibility
– maintainability
– testability

by

– decoupling of layers
– decoupling of modules – easier code management

and so forth, and it fit really well with our overall architectural philosophy of using a “service-oriented approach”. All the folks familiar with Spring-based development will easily identify with the “service-oriented” approach in the form of bundles.

This was okay as long as I could roll it out as a document; however, we also needed to enforce it using an “automated” mechanism. No prizes for guessing – we decided to do this in the “build” scripts. I used ANT + IVY over MAVEN (that story for another day).

So we could define dependencies at a granular level and could also control the direction of each dependency, and hence the “kind of coupling” we would allow.

I always thought it was okay to control dependencies statically, but what if I could check the same dynamically and control it as part of the bundle metadata? That would have given me the flexibility to pick and choose the “contracts” I would like to expose at the package level vs. the JAR level. It would also let me define dependencies at the version level and still let me run an old version of my JAR in the same VM. There were many thoughts and ideas … and I had no time to look around for a “dynamic” yet easy and powerful solution.

OSGi comes to the rescue for all of us who would like to pursue such a philosophy in our projects. It is pretty mature and has many implementations in different open source projects (including one in Spring). It practically liberates you from:
– JAR hell

– Classpath hell

– and dependency hell 🙂

E.g. in my case, I could remove the whole API and IMPL concept and control all the dependencies at the package level by specifying simple configuration settings in the JAR manifest files, making dependency control work for me.
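For illustration, the manifest headers involved might look like this hypothetical fragment: module A exports only its api package at a given version and imports B’s api within a version range, while its internal impl packages are simply never exported and thus invisible to other bundles.

```
Bundle-SymbolicName: com.example.moduleA
Bundle-Version: 1.0.0
Export-Package: com.example.moduleA.api;version="1.0.0"
Import-Package: com.example.moduleB.api;version="[1.0,2.0)"
```

The version range on Import-Package is also what lets two generations of the same library coexist in one VM, as described below.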

Consider a longer-running project in which a dependency, say a “commons” package, changes generations. You would like to take advantage of the new features but can’t, as there is a lot of old code (call it legacy) which still depends on an old package, class or method that is no longer supported. Well, now you can do that using OSGi: you can keep both versions of the commons package and work in the same VM.

There are many more advantages, which are better read about through the OSGi initiatives. Go have a read and liberate yourself and your project from the mess of managing dependencies 🙂

To add to this, Java (the VM) is going to introduce the concept of “superpackages”, and that is going to be another big, big change to the language itself in a future release. The concept is largely the same as OSGi’s, just that it will have syntactical support right out of Java, and dependencies will not need to be specified in manifest files. Besides, I think you might need to deal with another archive (MAR) :P.
Proposed code may look like:

@version(1.0.0)
@ExportResources{abc.jpg, icon.gif, script.js}
@ModuleAttributes{?,?}
@ImportPolicy{ImportPolicyImplementation.class}
super package com.sun.myModule {
    export com.sun.myModule.myStuff.*;
    export com.sun.myModule.yourStuff.Interface;
    com.sun.myModule.myStuff;
    com.sun.myModule.yourStuff;
    com.sun.SomeOtherModule.theirStuff;
    import org.someOpenSource.someCoolStuff;
}
go have a read on the following:

– Static module system – JSR 277
– Dynamic module system (OSGi) – JSR 291
– VM and language support – JSR 294

what is a creative technologist?

Why do we need creative technologists? Here is a nice read on how creative, contextual application of technology can have a profound impact on our society.

http://www.sapientnitroapac.com/2015/05/04/creative-technology-connecting-the-next-billion

So, while you may hear a lot about this role/profile, esp. in an agency context –

what really makes a creative technologist?

creative-tech

A creative technologist has to solve specific problems – commerce, marketing, service, product – a need, a behavioral change or what have you. So a creative technologist needs to operate in the context of an overall solution.

It is important to operate from the position of an expert – the expertise could be domain expertise, knowledge of an ecosystem (device or channel) or social context.

Creative use of technology is as relevant as understanding the inner beauty of a product’s or service’s connection with the consumer – how does the consumer feel while using your product or service, with technology just being the medium that makes it happen? So context and an understanding of experience become really important for a creative technologist.

And finally, as new technologies emerge, there will be opportunities to build new ideas into reality. So a creative technologist of today needs to be a maker – someone who can bring the digital and the physical together and can build new products that may not have any precedent.

Steve Jobs is probably the greatest creative technologist the world has seen – he understood consumers and solved for unforeseen needs using technology, with UX at the center of all his products.

There are a bunch of organizations focused on building new profiles and promoting hybrid talent. Although there are different views on developing this sort of mindset, the agenda and outcome are probably similar, i.e. to solve complex problems with a new, hybrid point of view. MIT Media Lab is the first one on my list: technologists, biologists, architects, environmentalists (and many other disciplines) are trying to solve real human problems using technology – it is probably the most advanced culture for grooming such hybrid talent. On the other hand, agencies like Wieden+Kennedy have developed their processes in such a way that technology and creative work on everything together – in fact, they have gone a step further and decided to do away with domains.

I believe the next set of challenges will require a new set of skills and new kinds of solutions. These ideas and solutions for a more complex modern world will come from hybrid talent – and the creative technologist is right at the center of it, because the role directly deals with human experience and technology.

a framework for building meaningful omnichannel experiences

I believe in building and delivering meaningful experiences for today’s consumers. As a creative technologist, I thought it would be meaningful to create a simple framework that brings my thought process together in one simple frame.

so here is what I believe in:

framework | meaningful experiences

A couple of important elements to this frame:

Fundamentally, idea and technology need to work together for a meaningful experience. Technology is as important as the idea.

Data acts as a bridge between the science and the art of storytelling. Data is the glue that binds marketing objectives and storytelling channels.

Always design for humans – keep human senses, natural communication and the human ecosystem in mind when designing experiences. If we design for humans as a philosophy, more likely than not we will achieve empathetic, natural experiences as an outcome.

are you a leader?

Leadership, to me, is defined by three important attributes:

    Authenticity:
    A leader has to be authentic in their approach to people, matters and situations. To command respect, it is imperative that you have original ideas and thoughts, but most importantly that you also have a style of your own. Many folks may or may not like it, but I believe it is imperative to have a style you can call your own.
    Expertise:
    Leadership is as much about inspiration as it is about expertise. A true leader stands for something; they should lead from a point of expertise, not just general best practices. Expertise is the only way you can lead smart folks (in many cases smarter than you): you can understand their POV and most likely add value to their thoughts and ideas. It could also be a connecting-the-dots expertise, wherein you bring a bunch of different expert skills together, give them a specific direction, and turn the sum total into a beautiful piece of product/service/art.
    Risk-taking ability:
    The true test of leadership is when you have to stand up for your people, your values and your principles. A leader puts their neck on the line to make the decisions that need to be made, but does not claim undue credit. In my experience, leadership can be earned just by exemplifying this one characteristic in situations where the team looks for decision making, decisive outcomes, and the promise of success in the face of uncertainty.

break boundaries, you are born to

Breaking boundaries; we come to hear of it in all walks of life. For argument’s sake, let’s agree on one common definition: in any discipline, when an individual goes beyond their current capacity, he or she has broken a personal boundary. E.g. an athlete may break a personal record or a benchmark record, but both are breaking boundaries in my book.

Let’s look at some great examples from our history that (truly) fueled the evolution of the human race. Fire and the wheel were two big inventions of the past. Transportation and the press were the two greatest vehicles for information and people exchange that transformed societies. The internet and telecommunications shrunk the world in terms of space and time (very, very crucial to where we find ourselves in today’s world). In between, there have been events of huge importance, like the race to the moon – in the ’60s the human race broke a huge boundary by reaching another celestial body (the Moon), and since then we have been exploring other planets, satellites and galaxies (far, far away in our universe).

Like you, I also believe that we will outdo ourselves in the coming times in every possible way. My conviction in the human race’s ability comes from my belief in how we are wired (genetically). Nature has built us like open hardware and software – so that we can plug in extensions, enhance our firmware and possibly create a new human species (be in charge of our own evolution). If you don’t believe me, ask yourself the following questions:

– why are the kids of the next generation smarter?

– when you look into the starry sky, why do you feel inquisitive?

– why does status-quo feel boring?

You could argue that nature created us with a bunch of limitations – our sense of color, our ability to distinguish (audible) frequencies, our ability to sense only a certain spectrum, limited by the five senses and then further limited by the limitations of those senses. But one important part of our body has not been shown to have any such limitations: what we have between the ears – the brain. It is a fantastic computer, designed in such a way that it can accept data from any source and make sense out of it (as various experiments around the world on extending and substituting senses have demonstrated).

here is a great TED talk on the next species:

https://embed-ssl.ted.com/talks/juan_enriquez_shares_mindboggling_new_science.html
and here is one from MIT on practical technology evolution:

I think we were born to believe that we can outdo ourselves; nature designed us in a way that we continue to look for “upgrades” … and I think it’s time we believe it and live it in day-to-day life.

Real – Virtual – Emotional | connections – impact on our life

Every day we wake up with a preoccupying set of thoughts that are connected to an outcome for the day, through the day. That set of thoughts gets more complicated as we live through the day, and the connections keep getting more and more complex; but somehow we live through it and make sense of it.

IoT (the next wave of the digital era) isn’t really different. If you re-analyze your day, you will realise that a bunch of those thoughts and connections are now being made across the “digital” and the “physical” (social networks, messaging, emails, voice, websites, mobile phones and so on). And if you were to expand this idea to the everyday objects in your life (your watch, shoes, shirt, belt, bag, kitchen appliances, cars and so on), you would be able to wrap your head around the notion of a “highly connected world” – the Internet of Everything – IoT/IoE.

The simple truth about IoT is that “n” connections will be made, either with your involvement or without it, but always in the context of YOU (who you are, what you do, wherever you are and so on). All of these connections will help create an understanding of your life, events, activities and behaviors, as has been the case in the non-digital world. Before digital arrived, you were making all these connections using organic matter (your brain), which internally leveraged millions of neurons (an interesting connection of senses) to sketch out the meanings that helped you take the next course of action.

Think about an observation of a friend’s behaviour that suddenly made you feel concerned about him: a series of events you noticed – how your friend was missing classes, hadn’t met you for multiple days, or hadn’t been reachable. From these observations you derived insights that led you to reach out and check in on him.

Similarly, in the new era, machines check on your routine, share observations and derive insights (all automated – AI) that help you always be in the know about your friends and family.

As marketers learn to truly leverage digital, it will become evident that every single “micro” moment created between a brand and a consumer lives somewhere in our brain. So, if a new message or new “micro” moment thrown at us doesn’t connect back to the previous one, its impact is smaller, as our brain cannot make the connection. And if there aren’t enough connections around the same object and its messaging, it never really becomes an embedded experience for the consumer. So “connections” are themselves an important pattern to explore and implement when marketers are trying to develop consumers and keep them loyal to a brand.

That’s the real purpose of all this – using technology to (truly) impact the way we live and help us become better human beings. That is the true impact of “connections”: an assembly of events that turns something into a memorable experience that helps me truly stay connected (human to human).

signal and the noise – a Data Science problem, not very different from our everyday problems?

Data, Big Data, Small Data – and the Science of finding meaning and knowledge nuggets in this massive amount of information is the sexiest job of the 21st century, according to a leading publication. Circa 2005, web 2.0, bustling with energy and ideas, got a massive thumbs up as millions and millions of people joined the bandwagon. As social media took center stage by 2007 and modern mobile devices became a rage, user-generated content took over: internet content created by people dwarfed the content created by centralized entities (corporates, governments and institutions).

Circa 2010, machines joined the internet revolution – the machines connected to the internet (and with the IoT surge, their numbers will continue to rise) outnumbered the people who were online. A new kind of data, content and knowledge is being created; but more importantly, a lot more data is being created. This led to a new web paradigm – web 3.0, the semantic web, an internet that could contextualize, personalize and filter through the noise to provide people with what they wanted.

Fundamentally, the new generation that leaves its valuable data online with its favorite brands expects that this data will be used to create meaningful experiences for them: experiences that are personalized to individual needs, contextualized to their environment, and suited to their wants.

Thankfully, the internet giants haven’t just been busy gathering your data and mine; they have also been developing technologies and solutions to solve our problems. One such advancement has been “Big Data” platforms. There has been a surge of new and reinvigorated interest in databases, processing technologies, distributed computing platforms, data science and machine learning algorithms, all helping to mine “intelligence” from this massive data that we create and consume on an everyday basis.

Data Science helps us find answers: it looks through the signals and cuts out the noise to get to meaningful events – intelligence that can give us beautiful insights, knowledge nuggets about the world and our interactions with it, and even an insight into behaviours of ours that we might be (consciously) unaware of. We are at a tipping point in the information revolution: we now move from “discovering the intelligence” to “mastering the intelligence”. You can see glimpses of that future in science-fiction movies like Minority Report and HER (one of my favorite future predictions).
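To make the signal-versus-noise idea concrete, here is a minimal sketch in Python (all numbers are made up for illustration): a steady underlying “signal” buried in random noise, partially recovered with a simple moving average – one of the most basic noise-cutting tools in a data scientist’s kit.

```python
import random

def moving_average(values, window=10):
    """Smooth a noisy series: each point becomes the mean of its recent window."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A hypothetical steady signal (say, a friend's normal daily activity level)
# buried in random day-to-day noise.
random.seed(42)
signal_level = 10.0
noisy = [signal_level + random.gauss(0, 3) for _ in range(50)]

smoothed = moving_average(noisy)

# Averaging cancels much of the noise, so the smoothed series sits
# closer to the true level than the raw observations do.
raw_error = sum(abs(x - signal_level) for x in noisy) / len(noisy)
smooth_error = sum(abs(x - signal_level) for x in smoothed) / len(smoothed)
print(raw_error > smooth_error)
```

The point isn’t the specific technique; it’s that “cutting the noise” is just systematic averaging over experience – exactly what our brains do informally.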

Have you ever been asked: “What is your passion?” 

I am sure that question has been posed to you at every step of life: as a student, as a professional, or whenever life and career started to stagnate. As a student, I had so many things going for me, so many things that excited me, and yet I didn’t really know what I was passionate about. What interested me the most? I would think that I was yet to find my passion – not everyone is lucky enough to find it early in life. Interestingly (but not surprisingly), these problems are similar to data problems: we always learn and derive from our life experiences. A student has to rely on his “Data Science” skills to cut the noise; his model of making sense is very different from the model a professional builds when looking for the right job. But we are all using “Data Science” to walk through a maze, all the time, trying to derive from everything we have gathered (data as experiences) throughout our lives.

There are so many examples in daily life where we play data scientist: a parent goes through a barrage of data to select the right school for their kid; we are always analysing our friends’ behaviour, and many a time predicting their reactions to situations. In short, we are all always using some form of maths, stats and algorithms to solve our everyday lives – but we fail to recognise it, and in many ways that’s a fault in the way we have been, or are being, taught these subjects (don’t even want to start on this).
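That everyday decision-making can even be written down as a tiny model. Here is an illustrative sketch of the school-selection example as a weighted-score calculation – the same arithmetic a parent runs informally in their head. The schools, criteria and weights are all hypothetical.

```python
# Hypothetical criteria and how much this parent cares about each (weights sum to 1).
weights = {"distance": 0.2, "fees": 0.3, "academics": 0.5}

# Each school scored 0-10 on each criterion (made-up numbers).
schools = {
    "School A": {"distance": 8, "fees": 6, "academics": 7},
    "School B": {"distance": 5, "fees": 9, "academics": 8},
}

def weighted_score(scores):
    """Combine per-criterion scores into one number using the parent's weights."""
    return sum(weights[c] * scores[c] for c in weights)

best = max(schools, key=lambda name: weighted_score(schools[name]))
print(best)  # School B (0.2*5 + 0.3*9 + 0.5*8 = 7.7 beats School A's 6.9)
```

Change the weights and the “right” school changes too – which is exactly why two parents looking at the same data can reach different conclusions.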

My humble submission here is that the “Data” problem is not for a select few; it is for all of us. Just as technology became an essential part of life and professional life, data will become significantly more important, so we all need to recognize that and make peace with it. Adopt data, and the data scientist within, and use it to your advantage.