Secretary Recruitment
Part 2: Interviews
“Cherish the learning experience provided by the process. Not becoming a secretary does not mean that you won’t be able to participate in club projects. We shall open most of our club projects to the campus junta and people who have been connected with the club and shown potential and interest.”
I hope the experience of attempting the secretary tasks was fun and educational. Considering the advancements in Large Language Models - especially reasoning models and models fine-tuned for code generation - we designed the tasks such that you would have to do a decent amount of digging and research before you could give any model the prompts required to generate your code. Looking at the number of responses, I feel you faced a significant challenge in attempting these problems - some probably gave up while others stuck around, ground their way through the difficulties, and made a submission. Congratulations to those who put effort into making a submission - complete or incomplete - because what we appreciate is the obsession and the willingness to stick around. Expect this kind of (in fact, far more nuanced and convoluted) problem statement in timed hackathons, internship challenges, and other competitions, because as language models get smarter, companies adjust their hiring mechanisms to maximize the probability of filtering out people who do not understand what they are pasting from the model output. Solving the F-th problem in Div. 2 with just a few prompts has become common practice - and with a new, more advanced model rolling out every single day, this is only going to get worse.
This was just the foreplay. Trust me when I say that the rollercoaster ride hasn’t even begun. Interviews are going to be particularly interesting, and at least for me they were a transformative experience in terms of how I look at any 2nd year Position of Responsibility. Not only did I realize the humongous gaps in my knowledge and skills, but I also understood that abstracting things away might be convenient, yet it is far from optimal when it comes to optimizing programs. Using abstraction frameworks like Django for backend development, ORMs for querying databases, and scikit-learn for machine learning algorithms is recommended for quick outputs, but understanding what these abstractions are built on is equally important.
Knowledge Graphs
A knowledge graph is a way of structuring information by representing entities (like people, places, or concepts) and the relationships between them, similar to a semantic network. It’s like a big, interconnected database that helps us understand complex relationships between different pieces of data. Traversing the knowledge graph of a particular concept or technology helps you understand related concepts, build in-depth knowledge, and actually grasp what is going on under the hood - something that programming calls for. Rather than just knowing that XYZ task can be accomplished via the ABC library, you should understand the internals of the ABC library. When I say internals, I do not mean understanding things at the hardware layer - that devolves into a different domain altogether. Ask enough questions and try to make enough correlations to understand how the processes function at a logical level.
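To make the idea concrete, here is a minimal sketch of how such a graph could be represented and traversed in code. The entities, relations, and edges below are purely illustrative assumptions centred on Django, not an authoritative map of any library:

# A minimal sketch of a knowledge graph: each entity maps to a list of
# (relation, neighbouring entity) edges. The edges here are illustrative.
from collections import deque

knowledge_graph = {
    "Django": [("speaks", "HTTP"), ("ships with", "ORM")],
    "ORM": [("emits", "SQL")],
    "SQL": [("runs on", "Relational databases")],
    "HTTP": [("secured by", "TLS")],
}

def traverse(graph, start, max_depth):
    """Breadth-first traversal up to max_depth, printing each relation."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for relation, neighbour in graph.get(node, []):
            print(f"{node} --{relation}--> {neighbour}")
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))

traverse(knowledge_graph, "Django", max_depth=2)

Choosing how deep and how wide to traverse is exactly the judgement call described above: go far enough to understand things at a logical level, not all the way down to the hardware.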
We’re not only looking for great implementations and solutions to the problem statements but also for your basic understanding of the domain and how inquisitive you are about learning new things. If you know how to use Object-Relational Mapping to simplify interactions between object-oriented programming languages and relational databases, then we also expect you to understand how you can query an SQL database without using an ORM. If you’re implementing a client-server architecture for your web app and using Django for the backend, we expect you to understand how the communication takes place between the two entities, how security is implemented at login and signup to keep user passwords secure, and how CSRF tokens work. If you’re using Mediapipe to access pre-trained models, you must also understand the architecture of those pre-trained models, the various processes and concepts used to implement that architecture, and how the training and inference of said architecture take place. Construct a Knowledge Graph in your brain with Mediapipe as the initiation node and traverse to an appropriate depth and breadth until you can understand things at a logical level.
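As one example of looking beneath the abstraction, here is a minimal sketch of querying a relational database without an ORM, using Python’s built-in sqlite3 module. The users table and the Django-style ORM call in the comment are assumptions made for illustration:

# Querying a relational database directly, without an ORM.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the example
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
cur.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Alice", "alice@example.com"))
conn.commit()

# An ORM call such as User.objects.filter(name="Alice") roughly boils down to a
# parameterised SELECT like this one (parameters also guard against SQL injection):
cur.execute("SELECT id, name, email FROM users WHERE name = ?", ("Alice",))
for row in cur.fetchall():
    print(row)

conn.close()

Knowing that the ORM ultimately generates and executes statements like these is the kind of logical-level understanding we are after.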
The good ol’ method of doing this is asking questions and trying to understand why particular things are done in a particular manner, asking “WHY” recursively until you’ve understood what’s actually happening. This is the usual approach we’re taught for coming up with well-explained arguments in Parliamentary Debating, and I feel it applies to each and every activity and event in life. The more modern approach - especially now that the “Deep Research” feature of ChatGPT has been pushed to production - is to use such tools to get an “in-depth” understanding of a topic.
Explainability
Pretty self-explanatory - are you able to explain your approach and final solution, along with the thought process and experimentation involved in solving the problem? It’s not tough to push code to GitHub; it is tough to organically think of approaches and test out multiple ones to find the most optimal. We might try to grill you on why you followed a particular process over another, or why you picked a particular algorithm over another. Even if you have not done this during the experimentation and research phase of your product cycle, try to do it retrospectively - and if you find a better approach, don’t shy away from telling us that you found it after submitting the task. PS: We shall appreciate honesty over you trying to gaslight us into believing something. Humans are flawed, we acknowledge this, and your accepting the flaws in your solution shall be preferred over your trying to cover them up.
Alignment
As coordinators, our objective in this process is not just to pick individual rising sophomores who are good at programming, but to form a diverse team that shall be able to - as a combined entity - carry out the mandate of the club. To understand the mandate and responsibilities, head back to the first blog of this pair. This ideology introduces metrics other than technical know-how for selecting candidates for this team: for instance, obsession, dedication, alignment with ideologies and culture, ability to work in a team, etc. So along with the technical preparation for the interview, try to think about these pointers as well.
I think this is more than enough content to brief you on the process, and I am not in the mood to conclude on a cliché note, unlike last time, for I opened this piece with that very note. Stay calm and don’t be overwhelmed by the expansive knowledge of the internet and language models.
Sit Vis Vobiscum
Author: Himanshu Sharma