Tech Press Review - Sept 19th 2023
---
In a recent article titled "Serverless Bun vs Node: Benchmarking on AWS Lambda," the author explores the performance of Bun, a new contender positioned as a replacement for the popular Node.js runtime. Bun claims to offer improved performance and developer experience while remaining largely compatible with Node.js. To test these claims, the author devised three benchmarks covering general processing performance, a CRUD API, and cold start times.
In the general processing performance test, Bun showed promising results: it ran the workload three to four times faster than Node.js, a gap that could benefit systems with CPU-bound and memory-bound workloads. The test involved generating and sorting 100,000 random numbers back-to-back, and Bun consistently outperformed Node.js.
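The article does not reproduce its benchmark source, but a minimal sketch of the generate-and-sort workload might look like the following (function names are illustrative, not the author's). The same file runs unchanged under both Node.js and Bun, which is what makes this kind of head-to-head comparison possible:

```typescript
// Generate and then sort 100,000 random numbers, timing the run.
// A sketch of the CPU- and memory-bound workload described above.
function generateAndSort(count: number): number[] {
  const numbers: number[] = [];
  for (let i = 0; i < count; i++) {
    numbers.push(Math.random());
  }
  return numbers.sort((a, b) => a - b);
}

const start = performance.now();
const sorted = generateAndSort(100_000);
const elapsedMs = performance.now() - start;
console.log(`sorted ${sorted.length} numbers in ${elapsedMs.toFixed(1)} ms`);
```

Timing a single run like this is noisy; a real benchmark would repeat the workload many times and report a distribution rather than one number.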
Moving on to the CRUD API test, which simulates real-world scenarios, Bun still showed potential benefits. The test implemented a CRUD Update function that validates its input, retrieves a record from DynamoDB, and modifies it. Although this workload is far less CPU-heavy than the first test, Bun held its ground and provided performance comparable to Node.js.
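The shape of such an Update handler can be sketched as follows. This is not the article's code: the field names are invented, and an in-memory Map stands in for the DynamoDB table so the sketch stays self-contained, with comments marking where the real GetItem/PutItem calls would go:

```typescript
// Sketch of a CRUD Update handler: validate input, fetch the current
// record, apply the change, and persist it.
type Item = { id: string; name: string; updatedAt: number };

// In-memory stand-in for the DynamoDB table used in the benchmark.
const table = new Map<string, Item>();
table.set("42", { id: "42", name: "old-name", updatedAt: 0 });

async function updateHandler(input: { id?: string; name?: string }) {
  // Input validation, as in the article's test.
  if (!input.id || !input.name) {
    return { statusCode: 400, body: "id and name are required" };
  }
  const existing = table.get(input.id); // would be a DynamoDB GetItem
  if (!existing) {
    return { statusCode: 404, body: "not found" };
  }
  const updated: Item = { ...existing, name: input.name, updatedAt: Date.now() };
  table.set(input.id, updated); // would be a DynamoDB PutItem
  return { statusCode: 200, body: JSON.stringify(updated) };
}
```

With a real table, nearly all of the handler's latency would sit in the two database round trips, which is why the runtimes end up so close in this test.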
One crucial aspect to consider in serverless environments is the cold start: the time taken to spin up a new container to handle a request. Node.js, as an officially supported runtime, benefits from platform-level cold start optimizations, whereas Bun must run as a custom runtime without them. With this in mind, the author tested Bun's cold start times, intentionally inducing them with a hello-world function. Despite the lack of such optimization, Bun held its own and showed promise in this area.
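A hello-world probe for this purpose is deliberately as small as a function gets, so that the measured time is dominated by runtime and container startup rather than user code. A sketch (illustrative, not the article's exact function):

```typescript
// Minimal handler: any latency beyond returning this constant response
// is runtime/container initialization, which is what a cold-start
// benchmark is trying to isolate.
export async function handler(): Promise<{ statusCode: number; body: string }> {
  return { statusCode: 200, body: "hello world" };
}
```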
Overall, the benchmarks suggest that Bun has the potential to challenge Node.js as a runtime for serverless, function-based applications. With impressive general processing performance and respectable showings in the CRUD API and cold start scenarios, Bun looks like a viable alternative. Developers looking to improve performance and developer experience may find it a compelling choice.
Source => https://medium.com/@mitchellkossoris/serverless-bun-vs-node-benchmarking-on-aws-lambda-ecd4fe7c2fc2
---
Monash University scientists have received a grant of US$407,000 from Australia's National Intelligence and Security Discovery Research Grants program for their work on the "DishBrain," a semi-biological computer chip containing approximately 800,000 lab-grown human and mouse brain cells.
The team demonstrated that the DishBrain showed signs of sentience by learning to play Pong within five minutes. The chip's micro-electrode array could both read and stimulate brain cell activity. Researchers set up a version of Pong in which the brain cells received electrical stimuli representing the ball's position and its distance from the paddle. If the paddle hit the ball, the cells received predictable stimulation; if it missed, they experienced four seconds of unpredictable stimulation. This experiment marked the first time lab-grown brain cells were given the ability to sense and act on their environment.
The scientists, in collaboration with Cortical Labs, believe that programmable chips combining biological computing with artificial intelligence could eventually surpass silicon-based hardware. The DishBrain's learning capabilities could benefit machine learning, particularly in autonomous vehicles, drones, and robots, by providing a type of machine intelligence that can continuously learn throughout its lifetime, adapt to change, learn new abilities without losing old ones, and optimize its computing power, memory, and energy usage. The grant will be used to further develop AI machines that replicate the learning capacity of biological neural networks, potentially replacing traditional silicon computing in the future.
Source => https://newatlas.com/computers/human-brain-chip-ai/
---
In today's world, code is everywhere, powering the systems that drive modern society. However, the increasing complexity of code has led to a rise in software failures and security vulnerabilities. From the faulty baggage-handling system at Denver International Airport to the software bug that halted trading on the Nasdaq stock exchange, these incidents can have significant consequences.
According to a recent survey, around three-quarters of examined applications contained at least one security flaw, with nearly one-fifth containing at least one flaw of high severity. Various methods exist to address these issues, including testing, debugging, code review, and disciplines such as functional programming. However, these methods are not foolproof, and they are not consistently applied.
The AI revolution in software development aims to make programming, debugging, and code maintenance more accessible and efficient. Systems like GitHub Copilot, Amazon CodeWhisperer, and Tabnine use AI-powered coding and code completion to assist programmers in different programming languages. OpenAI's ChatGPT is another offering that allows non-programmers to interact with a language model for writing code.
While AI-powered programming tools have their benefits, they also raise concerns. The code generated by these systems can still contain security vulnerabilities, just like human-written code. A study found that around 40% of programs completed by GitHub Copilot contained vulnerabilities. These vulnerabilities can range from buffer overflows to SQL-injection attacks, and until a solution is found to automatically detect and fix them, AI-generated code may remain weak in these areas.
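As a concrete illustration of the SQL-injection class mentioned above (the table, query, and function names here are invented for the example, not drawn from the study), compare a query built by string concatenation, the pattern such generated code often falls into, with a parameterized one:

```typescript
// Vulnerable pattern: user input is concatenated directly into the SQL
// text, so the input can rewrite the query itself.
function unsafeQuery(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// Safer pattern: the query uses a placeholder and the input travels
// separately, so the database driver can escape it.
function safeQuery(username: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE name = ?", params: [username] };
}

const payload = "' OR '1'='1";
// The payload turns the WHERE clause into a tautology, matching every row.
console.log(unsafeQuery(payload));
// With parameters, the SQL text never changes, whatever the input is.
console.log(safeQuery(payload).sql);
```

Detecting exactly this kind of pattern automatically, at scale, is the open problem the paragraph above refers to.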
Ultimately, AI-powered programming tools provide convenience and support in software development, but caution is necessary. Understanding the limitations and potential risks associated with these systems is crucial as the industry moves forward with AI-driven advancements in coding.
Source => https://spectrum-ieee-org.cdn.ampproject.org/c/s/spectrum.ieee.org/amp/ai-software-2661136690
---
OpenAI CEO Sam Altman believes that building strong artificial intelligence (AI) models is necessary to mitigate the risks of AI in the future. Altman stated that his company's recent AI model, ChatGPT, was a "great public service" that helped people understand and reckon with the idea of powerful AI. He also stressed the importance of transparency and public notice when it comes to the deployment of AI.
Altman co-founded OpenAI in 2015 with Elon Musk and other AI researchers, with a mission to develop artificial general intelligence (AGI) that would be beneficial to humanity. They aimed to avoid a reckless rush to AGI and to conduct their research transparently. OpenAI released ChatGPT in November 2022 and its GPT-4 model in March 2023, the latter with wide-reaching capabilities such as generating poems, passing the Uniform Bar Exam, and suggesting novel cocktail recipes.
Altman believes that AGI will lead to a new kind of society and that people need time to prepare, understand, and guide the development of AI technologies before AGI brings significant changes to work and human relationships. OpenAI is currently working on developing more advanced AI models, and Altman expressed the need for oversight and regulation of AI to prevent potentially harmful outcomes.
While AI technologies have the potential to bring advancements and benefits to society, Altman acknowledges the risks and uncertainties associated with their development. He called for proactive measures, including public involvement and governance, to ensure that AI is developed in a safe and beneficial manner for humanity.
Source => https://www.theatlantic.com/magazine/archive/2023/09/sam-altman-openai-chatgpt-gpt-4/674764/
---
At WWDC, Apple announced a new feature for iOS and macOS that provides predictive text recommendations as users type. This feature is powered by a Transformer language model, which is a departure from Apple's usual approach of prioritizing polish and perfection over large language models. While many details about the model remain unclear, some information has been uncovered.
The feature primarily completes individual words, with occasional suggestions for multiple words. The model's behavior was observed through an internal macOS application called AppleSpell. The predictive text model appears to be located in a specific folder in the operating system, containing Espresso model files used for typing.
The model's tokenizer includes a vocabulary set of 15,000 tokens, including special tokens, contractions, emojis, and regular words. It is unique in its emphasis on emojis and contractions, which caters to the context of text messages.
Based on the architecture of the model, it resembles the GPT-2 model developed by OpenAI, but with fewer decoder blocks. Apple's predictive text model has approximately 34 million parameters and a hidden size of 512 units, making it smaller and faster than GPT-2. While the model may not excel at generating full sentences, it performs well in suggesting the next word or two with high confidence.
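The reported figures can be sanity-checked with back-of-envelope arithmetic: a GPT-2-style decoder block costs roughly 12 × hidden² parameters (attention plus MLP), and the token embedding costs vocab × hidden. Positional embeddings, biases, and layer norms are ignored here, so this is only a rough consistency check, not the article's analysis:

```typescript
// Rough parameter accounting for a GPT-2-style decoder-only model,
// using the figures reported for Apple's predictive text model.
const vocab = 15_000;
const hidden = 512;
const totalParams = 34_000_000;

const embeddingParams = vocab * hidden;      // 7,680,000 token-embedding params
const perBlockParams = 12 * hidden * hidden; // ~3,145,728 params per decoder block

// Estimate how many decoder blocks fit in the remaining budget.
const estimatedBlocks = (totalParams - embeddingParams) / perBlockParams;
console.log(estimatedBlocks.toFixed(1)); // roughly 8
```

An estimate of around eight blocks is indeed fewer than GPT-2 small's twelve, so the reported numbers hang together.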
In tests, the model's suggestions were instant and provided a good user experience. However, when writing full sentences, the results were not as inspiring compared to GPT-2 models. It remains to be seen how Apple will continue to develop and expand this feature in the future.
If you're interested in exploring this further, the code used to investigate the model is available on GitHub.
Source => https://jackcook.com/2023/09/08/predictive-text.html
---
Introducing Magick - a powerful AI development platform that brings the world of coding to everyone, regardless of their technical background. With Magick's easy-to-use no-code visual-builder interface, anyone can create and customize AI components without the need for coding expertise. Whether you're an experienced developer or just starting out with AI, this platform is designed to make building AI applications easy and enjoyable. Say goodbye to the days of writing lengthy lines of code, as Magick allows you to build world-class AI applications with a simple drag-and-drop approach. So why wait? Join Magick today and start creating amazing AI applications without ever needing to write a single line of code.
Source => https://www.magickml.com/
---
In today's tech world, efficiently handling infrastructure while maintaining stability and security can be challenging. But there's a solution that can make this process smoother and more collaborative. It's called GitOps, a methodology that combines version control, git workflows, and automation with infrastructure as code.
The traditional way of handling infrastructure as code can be problematic. Storing configuration files on a local machine instead of in a git repository rules out team collaboration and code review. Keeping config files in a git repo but without a review and approval process means changes are committed directly to the main branch with no pull/merge requests, and the absence of automated tests makes the infrastructure and application environment unstable. Applying infrastructure changes manually makes them hard to trace, and mistakes are often found only after they have been applied.
GitOps offers several benefits for infrastructure as code. Git provides robust version control, enabling collaboration and rollbacks, and it is a tool that developers and operations engineers already know. In a GitOps workflow, a dedicated Git repository holds the infrastructure code, coupled with an associated DevOps pipeline, so changes can be collaborated on, tested, and approved before being applied to the environment.
Automation is a key aspect of GitOps. Once changes are merged into the main branch, they are automatically applied to the infrastructure through a continuous deployment (CD) pipeline. There are two ways to apply these changes: pull deployment and push deployment. Pull deployment involves an agent actively pulling changes from the git repository and applying them to the environment. Push deployment, on the other hand, uses jobs in the application CI/CD pipeline to update the infrastructure or deploy new application versions.
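At its core, a pull-style agent is a reconciliation loop: it reads the desired state (as it would from the git repository), diffs it against the live environment, and applies only the difference. The sketch below uses invented names and a plain in-memory representation of state; a real agent such as those used in pull-based GitOps would call a platform API where the comments indicate:

```typescript
// Minimal reconciliation loop in the pull-deployment style.
type State = Record<string, string>; // resource name -> deployed version

function reconcile(desired: State, live: State): string[] {
  const actions: string[] = [];
  // Apply anything that is new or changed in the repo.
  for (const [name, version] of Object.entries(desired)) {
    if (live[name] !== version) {
      actions.push(`apply ${name}@${version}`); // would call the platform API
      live[name] = version;
    }
  }
  // Prune resources that were removed from the repo.
  for (const name of Object.keys(live)) {
    if (!(name in desired)) {
      actions.push(`delete ${name}`);
      delete live[name];
    }
  }
  return actions;
}

const live: State = { api: "v1", worker: "v1" };
const desired: State = { api: "v2" }; // in git: api bumped, worker removed
console.log(reconcile(desired, live)); // [ 'apply api@v2', 'delete worker' ]
```

A push deployment inverts the direction: the same kind of apply step runs as a job in the CI/CD pipeline after the merge, rather than inside an agent polling the repository.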
One of the advantages of GitOps is the ease of rollback. By leveraging the power of Git, it's possible to track the history of changes and easily revert to a previous state if needed.
In conclusion, GitOps revolutionizes the way infrastructure and code are handled. By combining version control, git workflows, and automation, it makes the process smoother, more stable, and secure. GitOps ensures collaboration, testing, and easy rollback, ultimately improving the quality of infrastructure and its configuration.
Source => https://dev.to/arafetki/gitops-infra-as-code-done-right-2ojg