Technology

"Stay Hungry, Stay Foolish." ~ Steve Jobs


My Razorpay Interview Experience

During the online assessment for the Software Development Engineer Intern role at Razorpay, I was given a timed window to solve three questions. The first two were relatively easy and could be solved with regular LeetCode practice, whereas the third was more challenging. It was a "maximize the minimum" (or "minimize the maximum") style problem based on Binary Search on the Answer and Graphs: at each iteration of the binary search, I had to construct a graph and check the reachability between two points. This problem turned out to be the deciding factor in the online assessment, and very few candidates were able to pass all the test cases. Although I passed all the test cases using Binary Search, I later learned that Anumoy solved the question using Dijkstra's algorithm.
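
To illustrate the pattern (this is a minimal sketch, not the exact contest problem), here is how binary search on the answer combines with a reachability check: for a hypothetical weighted graph, find the largest threshold t such that the destination is still reachable using only edges of weight at least t.

```python
from collections import deque

def max_min_threshold(n, edges, src, dst):
    """Largest integer t such that dst is reachable from src using only
    edges with weight >= t (assumes non-negative integer weights)."""
    def reachable(t):
        # Rebuild the graph keeping only edges whose weight is at least t
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            if w >= t:
                adj[u].append(v)
                adj[v].append(u)
        # Plain BFS reachability check
        seen = [False] * n
        seen[src] = True
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                return True
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    q.append(v)
        return False

    lo, hi, best = 0, max(w for _, _, w in edges), -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if reachable(mid):
            best = mid      # feasible, try a larger threshold
            lo = mid + 1
        else:
            hi = mid - 1
    return best
```

Each feasibility check rebuilds the graph and runs a BFS, which is exactly the construct-and-check-reachability loop described above.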

Moving on to the interview, it was conducted on the Google Meet platform, and the interviewer began with a brief introduction, asking me about my name and background before introducing himself. He then proceeded to present me with two DSA problems, which were pasted on a Google Sheet.

The first problem was the Coin Change 2 problem, which is a standard DP problem. I explained my approach to the interviewer, starting with the recursive formulation and why the recurrence worked. After that, I coded the recursive solution and then memoized it. The interviewer appeared satisfied, and I mentioned that I could further convert it into a tabulation (bottom-up) DP. However, he was content with the solution and moved on to the second question.
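
For reference, a short top-down sketch of the standard Coin Change 2 recurrence in Python (my interview code followed the same idea, though not verbatim):

```python
from functools import lru_cache

def change(amount, coins):
    """Count the combinations of coins that sum to 'amount' (order does not matter)."""
    @lru_cache(maxsize=None)
    def count(i, rem):
        if rem == 0:
            return 1          # one valid way: take nothing more
        if i == len(coins) or rem < 0:
            return 0
        # Either take coins[i] again (stay at i) or skip to the next coin
        return count(i, rem - coins[i]) + count(i + 1, rem)
    return count(0, amount)

print(change(5, (1, 2, 5)))   # 4
```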

The second problem was the Excel Sheet Column Title problem, which is a LeetCode easy problem. However, I struggled with it on the spot. I took some time to come up with a pattern, wrote it down, and coded it against the test cases provided by the interviewer. Unfortunately, my solution did not generalize, and I ran out of time. The interviewer then asked me to write down whatever I had, so I wrote what I believed was a general solution (it was not), and it passed all the given test cases. I explained why my solution should work, and he dry ran the given test cases along with a few he had not pasted on the sheet. He appeared satisfied and concluded the interview without any parting words. I disconnected feeling like I had messed up badly. Later, I discovered that my solution to the second question would fail on some corner cases, which my interviewer had missed.
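
For completeness, here is the clean bijective base-26 solution; the subtract-one step handles the columns ending in 'Z', which is the kind of corner case my on-the-spot attempt most likely tripped on:

```python
def column_title(n):
    """Convert a positive column number to its Excel title (1 -> 'A', 28 -> 'AB')."""
    title = []
    while n > 0:
        # Subtract 1 before dividing so that 26 maps to 'Z' instead of rolling over
        n, r = divmod(n - 1, 26)
        title.append(chr(ord('A') + r))
    return ''.join(reversed(title))

print(column_title(701))   # 'ZY'
print(column_title(702))   # 'ZZ'
```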

To conclude, it is worth noting that I was not asked any questions beyond DSA. I found this somewhat surprising, but it is essential to prepare topics from DBMS, OOP, and OS as well, since most companies place significant focus on them. It is a relief to report that I was ultimately selected for the SDE Intern role at Razorpay.


Manas Pratim Biswas

My Ansys Interview Experience

I had the opportunity to interview for a Software Development Engineer Intern role with Ansys. The interview spanned over an hour and a half and was a rigorous technical examination of my skills and knowledge. Here is a detailed account of my interview experience.

The interview began with a deep dive into the Object-Oriented Programming (OOP) paradigm. The interviewer grilled me heavily on topics such as pointers and their types, void pointers, constructors and destructors, the calling order of constructors and destructors, and overloading them. Additionally, I was questioned on class versus object, and asked to dry run a code snippet involving pointers, polymorphism, abstraction, overloading, overriding, data hiding, encapsulation, and inheritance and its types. The interviewer also asked me about abstract classes and why they are needed, virtual classes, and operator overloading. It was clear that the interviewer was looking for a comprehensive understanding of OOP concepts and their application in real-world scenarios.

During my preparation for the interview, I found the MySirG YouTube playlist on OOP by Saurabh Shukla to be incredibly useful. His in-depth explanations and examples provided a solid foundation for my knowledge and understanding of OOP concepts.

Moving on from the OOP discussion, the interviewer presented me with three coding questions. The first problem was related to the Next Greater Element, which I solved using stacks. The second question involved finding the Longest Increasing Subsequence, which I solved using dynamic programming. Finally, the third question was about Detecting Cycles in an Undirected Graph. These questions were standard; however, they required a thorough understanding of the underlying data structures and algorithms. I was able to solve all three problems efficiently, which gave me a sense of confidence throughout the rest of the interview.
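
As an illustration of the first problem, a monotonic-stack solution in Python (a generic sketch of the approach, not my exact interview code):

```python
def next_greater(nums):
    """For each element return the next greater element to its right, or -1."""
    res = [-1] * len(nums)
    stack = []                     # indices still waiting for a greater value
    for i, x in enumerate(nums):
        # Pop every smaller element; x is its "next greater"
        while stack and nums[stack[-1]] < x:
            res[stack.pop()] = x
        stack.append(i)
    return res

print(next_greater([2, 1, 5, 3, 6]))   # [5, 5, 6, 6, -1]
```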

Overall, the interview was highly technical and focused on testing my problem-solving skills, as well as my knowledge of computer science fundamentals, with a strong emphasis on OOP. The interviewer asked thoughtful and challenging questions, and I appreciated the opportunity to engage in a meaningful discussion on various topics. The interview experience at Ansys was rigorous but fair, and it provided me with an opportunity to showcase my skills and abilities.

In conclusion, my experience with the Ansys interview process was highly rewarding. The interviewer was professional, knowledgeable, and highly skilled, and the questions were challenging and thought-provoking. While the interview was rigorous and demanding, it was also a great opportunity for me to showcase my skills and knowledge. I am happy to share that I was offered a 6-month internship at Ansys.


Manas Pratim Biswas

My DRDO Internship Experience

"The work at DRDO is confidential however, I have tried to give a brief overview of the technology and work I have done there."

During my internship at DRDO, I had the opportunity to work on web servers and gain valuable knowledge about the technology. A web server is a program that processes network requests and serves users with the files that make up web pages, using the Hypertext Transfer Protocol (HTTP). It stores HTTP-accessible files, including HTML documents, images, CSS stylesheets, and JavaScript files, which are delivered to clients upon request.

On the hardware side, a web server is a computer that stores web server software and website component files. It connects to the Internet and supports physical data interchange with other devices connected to the web. On the software side, a web server includes several parts that control how web users access hosted files. At a minimum, it includes an HTTP server, which understands URLs and HTTP, and can be accessed through the domain names of the websites it stores, delivering the content of these hosted websites to the end user's device.

Whenever a browser needs a file that is hosted on a web server, the browser requests the file via HTTP. The (software) HTTP server accepts the request, finds the requested document, and sends it back to the browser through HTTP. Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia documents, such as HTML. It was designed for communication between web browsers and web servers, but it can also be used for other purposes. HTTP follows a classical client-server model, with a client opening a connection to make a request, then waiting until it receives a response.
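
The same request/response cycle can be reproduced end to end with nothing but the Python standard library. This small sketch (not part of the DRDO work itself) starts a file-serving HTTP server and then plays the role of the browser by fetching a page from it:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler
from threading import Thread
from urllib.request import urlopen

# Serve the current directory over HTTP on localhost:8000
server = HTTPServer(("127.0.0.1", 8000), SimpleHTTPRequestHandler)
Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: open a connection, send a GET request, wait for the response
with urlopen("http://127.0.0.1:8000/") as response:
    print(response.status)            # 200
    print(response.read()[:100])      # first bytes of the served page

server.shutdown()
```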

HTTPS, which is the secure version of HTTP, encrypts data to increase the security of data transfer. This is particularly important when users transmit sensitive data, such as when logging into a bank account, email service, or health insurance provider.

During my training, I studied two web servers in particular: the Apache Web Server (XAMPP) and the Nginx Web Server. Both of them have their advantages and disadvantages. I worked on XAMPP, a free and open-source cross-platform web server solution stack package developed by Apache Friends, consisting mainly of the Apache HTTP Server, the MariaDB database, and interpreters for scripts written in the PHP and Perl programming languages.

For Nginx, the web server was used directly without the need for any third-party add-ons or software. Both web servers were installed on my local system and tested to gain a better understanding of their functions and performance.

Now I present a detailed workflow for parsing Apache and Nginx configuration files using Python. The aim of the process was to extract important information and fields from the configuration files. The workflow consisted of the following steps.

Setting up the web servers - First, XAMPP was installed, and a web server was set up on my local machine. A sample dummy webpage was hosted to test the server's functionality.

Studying the Apache Configuration File - To begin the parsing process, the file structure of the Apache Web Server was studied in depth. I focused on the structure and syntax of the Apache configuration file, httpd.conf. I examined the file in detail and noted its location on my machine, which is /Applications/XAMPP/xamppfiles/etc

Parsing the Apache Configuration File using Python - The information was extracted by parsing the file with the Python programming language. I chose Python due to its rich ecosystem of libraries. The parser used for httpd.conf is apacheconfig.

I parsed several important fields from httpd.conf, such as the web server version, listening ports, user and group, server name, error log directory, log levels, log format, custom log configuration, alias web paths, and access control. This process allowed me to extract crucial information that gave me insight into the setup and status of the web server.
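
A minimal sketch of how such fields can be read with apacheconfig (the path and directive names below are illustrative; the actual scripts handled many more fields):

```python
from apacheconfig import make_loader

# Illustrative path: on my machine httpd.conf lived under the XAMPP etc directory
conf_path = "/Applications/XAMPP/xamppfiles/etc/httpd.conf"

with make_loader() as loader:
    config = loader.load(conf_path)    # directives come back as a nested dict

# Top-level directives become dictionary keys
print(config.get("ServerName"))        # server name
print(config.get("Listen"))            # listening port(s)
print(config.get("ErrorLog"))          # error log location
print(config.get("LogLevel"))          # log level
```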

Studying the Nginx Configuration File - Next, I attempted to extract similar fields from the Nginx Web Server. To do this, I hosted a dummy website using the Nginx Web Server. I studied the file structure of Nginx and specifically the syntax of its configuration file, nginx.conf. I noted its location on my machine, which was /opt/homebrew/etc/nginx/nginx.conf

Parsing the Nginx Configuration File using Python - Similar Python-based parsing techniques were used to extract fields such as the conf file name, user, error logs, PID, worker connections, listening ports, server name, location blocks, error page, and access control from nginx.conf. The parser used for nginx.conf is crossplane.
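
A similar minimal sketch with crossplane, which returns the whole of nginx.conf as JSON-like Python structures (again illustrative rather than the full parsing script):

```python
import crossplane

# Illustrative path: on my machine nginx.conf lived under the Homebrew etc directory
payload = crossplane.parse("/opt/homebrew/etc/nginx/nginx.conf")

print(payload["status"])              # 'ok' if the file parsed cleanly
for config in payload["config"]:      # one entry per included file
    for directive in config["parsed"]:
        # Each directive is a dict with its name, arguments, and any nested block
        print(directive["directive"], directive["args"])
```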

Github Link to all the source code.

All of these important fields were printed in the terminal. However, a better modification would have been to write the parsed results to a file using file handling in Python.

In conclusion, parsing configuration files of web servers such as Apache and Nginx using Python can be a valuable tool for gathering important information about the web server's set-up and status. The workflow for parsing the configuration files includes setting up the web server, studying the configuration file, and parsing it using the Python programming language. By following this workflow, fields such as the version of the web server, listening ports, user and group, server name, error log directory, and more can be extracted. The parsers used here are apacheconfig and crossplane, and the parsed results can be printed in the terminal or written to a file. Overall, this workflow provides a useful method for obtaining valuable information about web servers and their configurations.


Manas Pratim Biswas

My CU Internship Experience

During my summer internship at Calcutta University, I had the opportunity to work on a project that involved using microcontrollers to collect and analyze water data. The project required a good understanding of various technologies such as the NodeMCU, Arduino, an ultrasonic depth sensor, and the ARIMA time-series model. In this article, I will discuss my experience working on this project and provide insights into the different aspects of the project.

NodeMCU is a microcontroller that is based on the ESP8266 Wi-Fi module. It has 11 digital input/output pins and one analog input pin. The NodeMCU operates at 3.3V and can be powered via a micro-USB port. It also has Wi-Fi connectivity, which makes it easy to connect to the internet and send data to a cloud platform. During my internship, I learned about the various pins present in NodeMCU and how to use them for different purposes.

Synchronization of the NodeMCU with the Arduino IDE was also a crucial part of the project. The Arduino IDE is open-source software used to program microcontrollers such as Arduino boards and the NodeMCU. It provides an easy-to-use interface to write, upload, and debug code. Synchronizing the NodeMCU with the Arduino IDE required installing the NodeMCU board package in the IDE and selecting the correct board and port settings.

Another key aspect of the project was the synchronization of the NodeMCU with the ultrasonic depth sensor. The ultrasonic depth sensor measures the distance to an object by sending and receiving ultrasonic waves, and it is commonly used in robotics and automation. Synchronizing the NodeMCU with the sensor required connecting it to the NodeMCU and writing the appropriate code to read and process the sensor data.
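
The project code itself was written in the Arduino IDE, but the trigger/echo timing idea is easy to show as a MicroPython-style sketch for the same ESP8266 board (the pin numbers and constants here are assumptions, not the actual wiring used):

```python
from machine import Pin, time_pulse_us
import time

trig = Pin(12, Pin.OUT)   # D6 on a NodeMCU board (pin mapping is an assumption)
echo = Pin(14, Pin.IN)    # D5

def distance_cm():
    # A 10 microsecond pulse on TRIG starts one measurement
    trig.off(); time.sleep_us(2)
    trig.on();  time.sleep_us(10)
    trig.off()
    # ECHO stays high for the round-trip time of the ultrasonic burst
    duration = time_pulse_us(echo, 1, 30000)   # timeout ~30 ms
    return (duration / 2) / 29.1               # sound travels roughly 29.1 us per cm

while True:
    print(distance_cm())
    time.sleep(1)
```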

The project also involved comparing ESP-32 and NodeMCU. ESP-32 is another microcontroller that is similar to NodeMCU but has more advanced features such as Bluetooth and dual-core processing. The comparison helped me understand the pros and cons of each microcontroller and choose the appropriate one for the project.

Synchronization of NodeMCUs was also an important part of the project, as it allowed multiple microcontrollers to communicate with each other. Identifying the MAC address of each NodeMCU was crucial for synchronizing multiple boards. The MAC address is a unique identifier assigned to the Wi-Fi module of the NodeMCU and is used to identify the device on a network.

Two-way communication between NodeMCUs was another key feature of the project, allowing data to be sent and received between two boards. Many-to-one communication was also important, as it allowed me to send data from multiple NodeMCUs to a single NodeMCU. One-to-one communication mattered as well, since it allowed me to send data between two NodeMCUs without interference from other devices on the network.

Multicasting multiple NodeMCUs to a single NodeMCU was another feature of the project. It allowed me to send data from multiple NodeMCUs to a single NodeMCU at the same time. This was useful in situations where I needed to collect data from multiple sources and send it to a single location for analysis.

I also connected the HC-SR04 with the NodeMCU and established one-to-one communication. The HC-SR04 is an ultrasonic sensor that is commonly used in robotics and automation. I wrote the appropriate code to read and process the sensor data and send it to the cloud platform for analysis.

Connecting the HC-SR04 with the NodeMCU and multicasting was also a part of the project. This allowed me to send data from multiple HC-SR04 sensors to a single NodeMCU at the same time, which was useful in situations where I needed to collect data from multiple sensors and aggregate it on a single board.

Another important aspect of my internship was learning about the different communication protocols used by the NodeMCU. One-to-one communication involves connecting two NodeMCUs through a common network and exchanging data between them. Many-to-one communication involves connecting multiple NodeMCUs to a single NodeMCU to receive and send data. Lastly, multicasting involves sending data from multiple NodeMCUs to a single NodeMCU. Through these various communication protocols, I gained an understanding of how data can be efficiently transmitted and exchanged between different microcontrollers.

During the later part of my internship, I worked on integrating the HC-SR04 sensor with the NodeMCU to send data to the ThingSpeak platform. The HC-SR04 is an ultrasonic depth sensor that is commonly used to measure distances, for example water levels in tanks. By connecting this sensor to the NodeMCU, I was able to collect real-time data on water levels in the tank. This data was then sent to the ThingSpeak platform, where it was analyzed and displayed visually for better understanding.
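
On the server side, ThingSpeak accepts readings through a simple HTTP update endpoint. Made from Python, the call looks roughly like this (the API key below is a placeholder, and in the project the readings were pushed from the NodeMCU itself):

```python
import urllib.parse
import urllib.request

THINGSPEAK_WRITE_KEY = "XXXXXXXXXXXXXXXX"   # placeholder, not a real key

def push_water_level(level_cm):
    """Send one reading to a ThingSpeak channel; 'field1' maps to the first channel field."""
    params = urllib.parse.urlencode({"api_key": THINGSPEAK_WRITE_KEY,
                                     "field1": level_cm})
    url = "https://api.thingspeak.com/update?" + params
    with urllib.request.urlopen(url) as resp:
        return resp.read()   # the new entry id, or b'0' if the update was rejected

print(push_water_level(42.5))
```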

Furthermore, I also worked on developing code for detecting leakage in a water tank using the HC-SR04 sensor and a water flow sensor. The water flow sensor measures the flow rate of water through a pipe, and by comparing it with the water level data collected by the HC-SR04 sensor, any sudden drop in water level not accounted for by normal outflow could be flagged as a leak. This code was tested and proved to be highly accurate in detecting potential leaks in the tank.
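
The underlying check is a simple mass-balance comparison; a simplified Python version of the idea (the thresholds, units, and tank geometry here are illustrative, not the values used in the project):

```python
def leak_suspected(level_drop_cm, flow_litres, tank_area_cm2, tolerance=0.1):
    """Flag a possible leak when the drop in water level implies more volume
    lost than the flow sensor can account for."""
    volume_from_level = level_drop_cm * tank_area_cm2 / 1000.0   # litres lost per the level sensor
    unexplained = volume_from_level - flow_litres                # litres not seen by the flow sensor
    return unexplained > tolerance * volume_from_level

print(leak_suspected(level_drop_cm=2.0, flow_litres=5.0, tank_area_cm2=10000))   # True
```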

Towards the end of my internship, I worked on developing a future forecasting model using the ARIMA time-series model. ARIMA is a statistical model used to analyze and forecast time-series data. By using this model, I was able to predict the future consumption of water based on the historical data collected by the NodeMCU and the HC-SRO4 sensor. This forecasting model was considerably accurate and proved to be extremely useful in predicting the future demand for water in the tank.
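
In Python, this kind of forecast can be produced with statsmodels; the sketch below uses made-up readings and an arbitrary ARIMA order, whereas the real model was fit on the collected sensor data:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Illustrative series of hourly water-level readings (stand-in for the sensor data)
levels = pd.Series([118, 117, 115, 114, 112, 111, 109, 108, 106, 105, 103, 102],
                   index=pd.date_range("2022-06-01", periods=12, freq="H"))

# Fit a small ARIMA(p, d, q) model; the order is arbitrary here and would be tuned on real data
model = ARIMA(levels, order=(1, 1, 1))
fitted = model.fit()

# Forecast the next 6 hours of water level
print(fitted.forecast(steps=6))
```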

Overall, my summer internship at Calcutta University provided me with invaluable hands-on experience in working with microcontrollers and sensors, as well as data analysis and forecasting models. I gained a deep understanding of the various technologies used in the field of IoT and their practical applications in real-world scenarios. Through this internship, I was able to sharpen my technical skills and also learned the importance of teamwork and collaboration in achieving common goals. The knowledge and skills I gained during this internship will undoubtedly be useful in my future career as a professional in the field of IoT and Software Development in general.

Github Link to all the source code.


Experience the Realism of Football Like Never Before: The Technical Wizardry of FIFA 23's HyperMotion2

Introduction:

FIFA 23 , the latest release in EA Sports' popular football video game franchise, features a new gameplay system called "HyperMotion2", which uses machine learning algorithms to create over 6,000 in-game animations. Additionally, the game's "Technical Dribbling" system uses the "Active Touch" system to improve players' dribbling and turning with more responsiveness. This article provides an in-depth technical analysis of the machine learning algorithms behind HyperMotion2 and the techniques used to incorporate the animations into the game.

HyperMotion2 and Machine Learning:

The HyperMotion2 system captures data from real-life football matches and processes it with advanced machine learning algorithms. These algorithms analyze the data and identify patterns, allowing the system to create realistic in-game animations that accurately replicate the movements of real-life players.

The machine learning algorithms used in HyperMotion2 are based on deep learning neural networks, which are loosely inspired by the structure of the human brain. These networks consist of layers of interconnected nodes that process and analyze data, allowing the system to identify patterns and make predictions based on past examples.

To train the machine learning algorithms used in HyperMotion2, EA Sports used a large dataset of football matches captured with advanced motion-capture technology. The dataset includes thousands of hours of footage, which was carefully analyzed and labeled by a team of experts to ensure the accuracy of the data.

EA Sports also made use of a technique called backpropagation. Backpropagation adjusts the parameters of a neural network to minimize the difference between the network's predictions and the desired output. In the case of HyperMotion2, it is used to tune the network so that the in-game animations accurately replicate the movements of real-life players.
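
As a generic illustration of what backpropagation does (a textbook toy network, nothing specific to EA's models), the snippet below nudges the weights of a tiny network so that its predictions move towards the targets:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy targets

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
lr = 0.5

for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    # Backward pass: propagate the prediction error back towards the inputs
    d_out = (p - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)       # tanh derivative
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # Gradient-descent update that reduces the prediction error
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(float(((p > 0.5) == y).mean()))         # training accuracy after a few hundred steps
```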

The machine learning algorithms used in HyperMotion2 are able to identify patterns in the data and use these patterns to create new animations that accurately replicate the movements of real-life players. The system is able to generate over 6,000 unique animations, each with its own set of variables and parameters that determine how the animation will be executed in-game.

Incorporating Animations into the Game:

To incorporate the animations created by the HyperMotion2 system into the game, EA Sports used a combination of motion-capture technology and traditional animation techniques. The motion-capture technology was used to capture the movements of real-life players, while traditional animation techniques were used to refine and enhance the animations.

The motion-capture technology used by EA Sports is one of the most advanced in the industry. It involves placing sensors on the bodies of real-life players and capturing their movements in real-time. The data captured by the sensors is then fed into the machine learning algorithms used by HyperMotion2, which use this data to create new animations that accurately replicate the movements of the players.

Once the animations have been created by the machine learning algorithms, they are refined and enhanced using traditional animation techniques. This involves adjusting the timing, spacing, and other parameters of the animations to ensure they look as realistic as possible.

Features of HyperMotion2

Technical Dribbling is a brand new dribbling system that allows attackers to calculate their path to the ball more intelligently. Animation transitions are informed by machine learning between every dribble touch, delivering next-gen visuals and variety with the ball at your feet.

The ML-Jockey system uses machine learning technology to learn from real-world defensive situations and movement taken from advanced 11v11 match capture data to rewrite the jockeying system in FIFA 23, creating more natural animations as defenders face up to oncoming attackers.

AcceleRATE provides a new layer of personality to how players reach their top speed in FIFA 23. From explosive acceleration off the mark to more controlled or lengthy changes of pace, players will now reach their top speed at different rates to create more variety in player movement and gameplay.

Composed Ball Striking is a new feature that allows for more natural transition between controlling the ball and shooting. Combining precision and fluidity, longer high-quality animation creates one continuous motion as players trap the ball, rotate, and strike the ball towards the target.

Kinetic Goalkeeper vs Header Battles is an evolution in two-player interaction technology that syncs player animations as goalkeepers rise to meet crosses against opposing attacking players, creating cleaner outcomes as keepers claim aerial balls.

Epilogue:

HyperMotion2 is a groundbreaking system that uses advanced machine learning algorithms to create over 6,000 in-game animations that accurately replicate the movements of real-life football players. The system is based on deep learning neural networks, which analyze data from real-life football matches to identify patterns and create new animations. Overall, the HyperMotion2 system represents a major step forward in the development of football video games, and is likely to set a new standard for realism and immersion in the genre.


Exploring the Technical Aspects of Web3 and Blockchain: A Deep Dive into the Technologies Behind Bitcoin and Ethereum

Introduction

The internet has revolutionized the way we interact, conduct business, and communicate with each other. However, despite the tremendous progress we have made, the internet is still far from perfect. Centralization, data privacy concerns, and the lack of transparency are some of the issues that plague the internet today. This is where web3 and blockchain come in. Web3, also known as the decentralized web, is the next evolution of the internet, and it is built on the foundation of blockchain technology. In this blog post, we will explore the history of web3 and blockchain, the features of blockchain, the technical background, and the technologies used in blockchain-based cryptocurrencies like Bitcoin and Ethereum.

The History of Web3 and Blockchain

The concept of web3 has been around since the early 2000s when Tim Berners-Lee, the inventor of the World Wide Web, envisioned a web of linked data that would allow machines to communicate with each other. However, the idea didn't gain much traction until the advent of blockchain technology.

The first blockchain was introduced in 2008 by an unknown person or group of people who went by the name Satoshi Nakamoto. This blockchain was the foundation of the world's first cryptocurrency, Bitcoin. Bitcoin was introduced as a peer-to-peer electronic cash system that would allow users to send and receive payments without the need for intermediaries like banks or payment processors. Since then, blockchain technology has evolved, and it is now being used in various applications beyond cryptocurrencies.

The Features of Blockchain

At its core, blockchain is a distributed database that stores information across a network of computers. The information stored on the blockchain is immutable, meaning that once it is added to the blockchain, it cannot be modified or deleted. The database is maintained by a network of nodes, and every node has a copy of the entire database. When a new transaction is added to the blockchain, it is verified by the network of nodes, and if it is valid, it is added to the blockchain.

The technical background of Blockchain

Blockchain technology uses cryptography to ensure that transactions on the blockchain are secure and verifiable. When a new transaction is added to the blockchain, it is broadcast to the network, and the nodes on the network use complex mathematical algorithms to verify the transaction. Once the transaction is verified, it is added to a block, which is then added to the blockchain. The blocks are linked together in a chain, hence the name "blockchain."
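
The chaining itself is easy to demonstrate: each block stores the hash of the previous block, so changing any earlier block invalidates every hash after it. A toy Python sketch (real blockchains add Merkle trees, signatures, and consensus on top of this):

```python
import hashlib
import json
import time

def make_block(index, transactions, prev_hash):
    """Create a block whose hash covers its contents and the previous block's hash."""
    block = {"index": index,
             "timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(0, [], "0" * 64)
block1 = make_block(1, [{"from": "alice", "to": "bob", "amount": 5}], genesis["hash"])
print(block1["prev_hash"] == genesis["hash"])   # True: tampering with genesis breaks this link
```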

The technologies used in Blockchain

Blockchain technology is built on several key technologies, including:

1. Cryptography: This is the practice of using mathematical algorithms to secure information.
2. Distributed Systems: A distributed system is a network of computers that work together to accomplish a common goal.
3. Consensus algorithms: These are algorithms used to ensure that the nodes on the network agree on the state of the blockchain.
4. Smart Contracts: These are self-executing contracts that are programmed to execute when certain conditions are met.
5. Peer-to-peer networking: This is a type of network where each node on the network can communicate with other nodes without the need for a central server.

Bitcoin and Ethereum

Bitcoin was the first cryptocurrency to use blockchain technology. It was introduced in 2008, and it quickly gained popularity as an alternative to traditional currencies. Bitcoin is decentralized, meaning that it is not controlled by any government or financial institution. It uses a proof-of-work consensus algorithm to verify transactions, in which miners repeatedly hash candidate blocks until they find a hash that meets a difficulty target.
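
A toy version of that proof-of-work puzzle in Python: keep incrementing a nonce until the block's SHA-256 hash starts with a required number of zero hex digits (real Bitcoin mining hashes a binary block header against a far higher difficulty):

```python
import hashlib

def proof_of_work(block_header, difficulty=4):
    """Find a nonce so that sha256(header + nonce) starts with 'difficulty' zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("example-block-header")
print(nonce, digest)
```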

Ethereum, on the other hand, is a blockchain-based platform that allows developers to build decentralized applications (dapps) on top of its blockchain. Since the Merge in 2022, Ethereum uses a different consensus algorithm called proof-of-stake, which is far less energy-intensive than proof-of-work.

Epilogue

Web3 and blockchain technology have the potential to revolutionize the way we interact with the internet and conduct transactions. By removing intermediaries and enabling peer-to-peer transactions, blockchain-based cryptocurrencies like Bitcoin and Ethereum provide a more transparent and secure way of exchanging value. Moreover, the introduction of smart contracts and decentralized applications has opened up a world of possibilities for building decentralized services that are not controlled by a single entity. This has the potential to disrupt traditional industries and create new business models.

As web3 and blockchain technology continue to evolve, we can expect to see more innovative use cases and applications. From decentralized finance to digital identity and voting systems, the potential for web3 and blockchain technology is truly limitless. While web3 and blockchain technology may still be in their infancy, they have the potential to transform the internet and the way we interact with it. As we move towards a more decentralized future, it is essential that we embrace these technologies and explore the possibilities they offer.

Manas Pratim Biswas