AI Advisory - High-speed learning, applied past lessons, and plenty of scars from building consistent, low-latency, and highly accurate solutions :)
Keep Going!!!
Deep Learning - Machine Learning - Data(base), NLP, Video - SQL Learnings - Startups - (Learn - Code - Coach - Teach - Innovate) - Retail - Supply Chain
Introduction
I embarked on a journey to engage with my ex-colleague, who is currently a VP in a small industrial construction company, by providing AI advisory and learning sessions. Initially, it seemed like a promising exploration—teaching and guiding them through foundational concepts like LLMs, prompts, and RAG (Retrieval-Augmented Generation).
Early Teaching Phase
In the beginning, I had to explain the basics: what an LLM can do, what a prompt is, and how RAG techniques enhance information retrieval. After about a month, this ex-colleague returned, claiming there was negligible value and no tangible deliverables. This should have been a red flag, indicating that the cost and effort involved in teaching, training, and experimenting were not fully appreciated.
The Red Flags
In hindsight, I realize I should have caught the warning signs earlier. I kept insisting that experimentation was the key to understanding the capabilities and limitations of GenAI tools. Instead, this person seemed to push for more work in a shorter timeframe—a strategy to extract maximum value with minimal investment.
Shifting Roles and Promises
Later, I received an offer to join their team with a fixed pay and 5K shares, helping to architect solutions, pitch them to the market, and shape the product roadmap. The proposal seemed promising, aligning with my goal of taking on a more advisory and architectural role. Little did I know it was a tactic to consult, gain maximum value, and then part ways.
Building a Product and Architecture
As trust deepened—bolstered by a long-standing relationship spanning over a decade—we agreed on shares and informal terms. I invested significant effort: training the team from scratch in LLM prompting, RAG, search customization, accuracy improvement, data preprocessing, and system architecture. I demonstrated how to organize data effectively and leverage different approaches for better product outcomes. I also built a pitch deck, developed an API strategy, and created a technical feature roadmap.
The Unexpected Termination
Even as the product began to take shape, I was blindsided. Suddenly, he informed me they no longer required my services because they had found someone else to present the solution to the market. My requests for formal acknowledgments, like patents, were brushed aside. From the start, they had planned to offer low pay and shares, then terminate later. There was a clause stating that shares were invalid if I was no longer working for them—a clever strategy of betrayal. This was a person I had known for 14 years. It’s a stark reminder of what even people you know well can do.
Lessons Learned
This experience taught me that trust should be tempered with caution. Even long-standing relationships can falter when values, mindsets, and ethics come into play. Nonetheless, the knowledge I gained—developing product pitches, architectures, and end-to-end solutions—is useful for my current customers :), whether they need unstructured ETL solutions, industrial RAG systems, or tailored recommendation engines. Everything that breaks you builds you even stronger in the next epoch.
My Advice
Always approach advisory roles with careful consideration and safeguards in place, no matter the length or depth of prior relationships. While losing out can hurt, the experience and skills you acquire will benefit you and your future clients.
Keep Going!!!
#HumanwrittenAIEdited #Perspectives #GenAI #Myworkperspectives
Meta rolls out internal AI tool as it pushes into business market
Automation, Assistance, Copilot = Metamate
Keep Thinking!!!
Geoffrey Hinton: the future with smarter AI is unpredictable
— Haider. (@slow_developer) December 14, 2024
We're entering an era of uncertainty when we start dealing with things as intelligent or more intelligent than us.
Then, we have no idea what's going to happen. pic.twitter.com/a4rX8eZqdW
Klarna CEO stopped hiring employees last year because “AI can do most jobs” and is down -22% headcount to 3500.
— Deedy (@deedydas) December 15, 2024
The private Swedish buy-now-pay-later company is worth $14.6B, up from $6.7B in 2022 but down from $45.6B in 2021 and is trying to go public next year. pic.twitter.com/P0eFADQqUS
Productivity Improvements vs Job Pressure vs Job Cuts
This is the reason we need Responsible AI Adoption!!!
Satya Nadella explains the AI Agentic Future.
— Rohan Paul (@rohanpaul_ai) December 14, 2024
The business logic is all going to these Agents.
-----
Video from Bg2 Pod Youtube Channel (link in comment) pic.twitter.com/CCzcvPvmZ5
From #Swartz to #Balaji: history shows the cost of inaction. AI needs #transparency & oversight now. We need: - Mandatory AI #data disclosure - Fair creator compensation - Clear #copyright standards #Policymakers #AIPolicy #TechEthics #AIEthics #ResponsibleAI
What happened to #suchir 👀‼️
— @SakashiNakamoto (@SakashiNakamoto) December 14, 2024
"Suchir Balaji, former OpenAI researcher and whistleblower, tragically passed away on November 26, 2024, at the age of 26. Known for his role in highlighting legal and ethical concerns regarding AI's data use, Balaji’s passing has left the tech… pic.twitter.com/gCb9cdnaLQ
I recently participated in a NYT story about fair use and generative AI, and why I'm skeptical "fair use" would be a plausible defense for a lot of generative AI products. I also wrote a blog post (https://t.co/xhiVyCk2Vk) about the nitty-gritty details of fair use and why I…
— Suchir Balaji (@suchirbalaji) October 23, 2024
Some Things to Deep Dive / This should not have happened
Optimus can now walk on highly variable ground using neural nets to control its electric limbs.
— Elon Musk (@elonmusk) December 9, 2024
Join @Tesla if you want to work on interesting real-world AI systems. https://t.co/C8J90Age5Y
Keep Exploring!!!
Steve Jobs On How To Resolve Conflict:
— Big Brain Business (@BigBrainBizness) December 11, 2024
(1) Don't try to get someone to 'buy into a decision' because you should hire people so phenomenal they tell YOU what to do. Thus, if you're trying to get them to buy into a decision, you've hired them to do what they think is right (likely… pic.twitter.com/8h26DaVr1u
Keep Thinking!!!
“You age it, you age it, you age it…”@satyanadella just explained the business model for AI to everyone in plain sight.. holy shit pic.twitter.com/Fss6jR2vZ0
— JJ (@JosephJacks_) December 13, 2024
Agentic World
Two New Job Directions
Happy Agentic Solutions.
Solving a problem in one hour often comes from countless hours of background work leading up to that moment.
Understand your pricing model and price wisely.
Keep Going!!!
"Best practices" is a very common jargon, and we need to understand its meaning.
Understanding "best practices" is about recognizing its role in improving processes and outcomes across domains.
In my early career, when I used to write code, the review comments I would receive were, "This is not following best practices; go and check." So, there, you gain some awareness of IDE tools, language, and coding approaches.
Early exposure to "best practices" builds foundational skills in tools, languages, and methodologies, shaping your problem-solving approach.
Now, working in a startup, best practices are constrained to a few things:
In startups, constraints like budget, cloud resources, and available talent redefine how "best practices" are applied.
The architecture needs to be cost-effective. Startups often run on a very tight budget, so you need to be frugal. You need to make things work, and for every backup or option, you need to pay. Until you reach a certain stage, most startups may rely on credits or cohorts from cloud providers, so it's always about leveraging all of this.
"Do what you can, with what you have, where you are." – Theodore Roosevelt
Cost-effective architecture demands frugality, creative problem-solving, and strategic leveraging of resources like cloud credits.
Additionally, you may not get top-class talent, and people don't stay for various reasons. It's not just about money. Of course, money is one part of it, but factors like learning opportunities, culture, and trust also matter. I've been working with many freshers, and there's often a knowledge gap between someone from a high-profile institution and someone from a tier 2 or tier 3 college. But it's an investment of time, effort, trust, and mentoring. Things don't happen magically, but by being there, guiding, troubleshooting, and taking it one step at a time, progress happens.
Building talent in startups requires investing in mentorship, bridging knowledge gaps, and fostering trust and a growth mindset.
"An investment in knowledge pays the best interest." – Benjamin Franklin
In the rapidly evolving world of artificial intelligence, businesses face a multifaceted challenge when it comes to AI adoption. The decision to build or buy, to hire directly or outsource, and to choose the right use cases are critical and can significantly impact the success of AI integration within any organization.
🔍 Key Considerations:
1. Cloud Partnerships: Aligning with a cloud provider can dictate the models and technologies available to you. It's essential to leverage these partnerships effectively to maximize your AI capabilities.
2. Use Case and Data Availability: Choosing the right use case is just the beginning. The availability and adequacy of data for model training or fine-tuning are paramount. Without sufficient data, even the most promising AI projects can falter.
3. Model Development Timeline: Whether it's benchmarking, extended testing cycles, or A/B testing, understanding the time required to develop and refine AI models is crucial for planning and execution.
4. Costs and Talent: The infrastructure and talent costs can often lead businesses to outsource AI and machine learning tasks. However, this brings its own set of challenges and dependencies.
5. Accuracy and Maintenance: Developing AI models that not only perform well initially but also maintain high accuracy over time requires continuous updates and skilled personnel.
6. Ethical AI: Adopting AI responsibly ensures that the technology not only serves the business goals but also aligns with broader ethical standards.
🌟 Solution Spotlight:
Innovative solutions like vector search, keyword search, semantic search, or rule-based search can address specific needs, but success fundamentally depends on the right blend of talent, technology, and timing.
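To make that blend concrete, here is a minimal, library-free sketch of hybrid search that mixes a keyword score with a vector-similarity score. The `embed()` function and the example documents are placeholders of my own, not any product's API; in practice you would swap in a real embedding model and a proper keyword engine such as BM25.

```python
# Minimal hybrid-search sketch: blend keyword overlap with vector similarity.
# embed() is a toy stand-in so the file runs as-is; replace it with a real model.
from collections import Counter
import math

DOCS = [
    "return policy for online orders",
    "warranty claims for electronics",
    "store hours and holiday schedule",
]

def keyword_score(query: str, doc: str) -> float:
    """Naive keyword overlap (stand-in for BM25 or a search engine)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return float(sum((q & d).values()))

def embed(text: str):
    """Toy 'embedding': a character histogram. Replace with a real model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, alpha: float = 0.5):
    """alpha tunes the keyword/vector mix; the right value is use-case specific."""
    qv = embed(query)
    scored = [
        (alpha * keyword_score(query, d) + (1 - alpha) * cosine(qv, embed(d)), d)
        for d in DOCS
    ]
    return sorted(scored, reverse=True)

print(hybrid_search("holiday return policy"))
```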
As we continue to embrace AI, let's discuss how we can overcome these challenges through innovative strategies and collaborative efforts. How is your organization navigating these complexities in AI adoption?
Share your insights!
#AI #BusinessStrategy #Innovation #DataScience #CloudComputing #EthicalAI
Prompts Can Be as Valuable as Code
Well-crafted prompts are just as important as writing clean code, especially when versioning them. A good prompt is optimized for token usage, model compatibility, chunk sizes, and temperature settings, ensuring efficiency and performance. These parameters may need to be adjusted based on the type of document, text, or context being handled, making prompt versioning a critical practice.
Key Tools and Features for Prompt Management
Langfuse
Prompthub
Proper prompt versioning, coupled with tools like Langfuse and Prompthub, ensures optimal performance and adaptability across use cases
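As a rough illustration of what "versioning a prompt" can mean in code, here is a minimal, tool-agnostic sketch; it is not the Langfuse or Prompthub API. Each version pins the template together with the model, temperature, and chunk size it was tuned for, so changing any of those is an explicit new version. The prompt names and settings are illustrative assumptions.

```python
# Minimal prompt-version registry: each version records the template and the
# settings it was tuned for, so parameter changes are tracked, not silent.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class PromptVersion:
    name: str
    version: int
    template: str          # e.g. "Summarize the following contract:\n{chunk}"
    model: str              # model the prompt was tuned against
    temperature: float
    max_chunk_tokens: int   # chunk size the prompt expects
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

REGISTRY: dict[tuple[str, int], PromptVersion] = {}

def register(p: PromptVersion) -> None:
    REGISTRY[(p.name, p.version)] = p

def latest(name: str) -> PromptVersion:
    return max((p for p in REGISTRY.values() if p.name == name),
               key=lambda p: p.version)

register(PromptVersion("contract_summary", 1,
                       "Summarize the following contract:\n{chunk}",
                       model="gpt-4o-mini", temperature=0.2,
                       max_chunk_tokens=800))
register(PromptVersion("contract_summary", 2,
                       "Summarize the contract below in 5 bullet points:\n{chunk}",
                       model="gpt-4o-mini", temperature=0.0,
                       max_chunk_tokens=1200))

print(latest("contract_summary").template)
```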
Keep Exploring!!
Coca-Cola Old Ad
Back in 1995, Coca-Cola launched their famous “Holidays Are Coming” ad.
— Salma (@Salmaaboukarr) November 20, 2024
It showed a convoy of Christmas-lit trucks arriving in a snowy town, spreading joy.
It became a classic and made Coca-Cola a big part of holiday traditions. pic.twitter.com/S5KAi5SqgM
What is Generative AI?
— Salma (@Salmaaboukarr) November 20, 2024
It’s advanced tech that creates text, images, audio, and video.
Coca-Cola used it to:
• Capture the nostalgic essence of the 1995 ad
• Reimagine it with modern visuals
• Create something that couldn’t have been done before pic.twitter.com/tDvmGElFUM
In our fast-paced world, the allure of convenience often overshadows the hidden costs associated with it, particularly in the realm of online food delivery services like Instamart and others. While these services offer quick solutions to our daily needs, it's crucial to pause and consider the broader implications of their use.
🔍 *Quality and Health Concerns:*
Many of these platforms may lack stringent quality checks, especially for perishable items that endure various stages of the supply chain. The absence of transparency about food sources, shelf life, and kitchen standards raises significant health concerns. The convenience of having food delivered to your doorstep might seem appealing, but it could lead to health issues if the food's quality and handling are compromised.
🌍 *Environmental and Social Impact:*
The rise in quick deliveries contributes to increased pollution and traffic congestion. Moreover, the shift towards consumer convenience overlooks the potential for physical activity, such as walking to a nearby store, which can be beneficial for both health and the environment.
💸 *Economic Considerations:*
Opting for nearby eateries or cooking at home not only ensures a better understanding of what you consume but can also be more economical in the long run. The costs associated with frequent use of delivery apps add up, and the perceived convenience might not justify the expense.
🤖 *Technological Implications:*
While technology drives innovation in delivery methods, including potential shifts to drone deliveries, it's essential to question whether these advancements contribute to meaningful knowledge growth or merely support a consumerist mindset focused on profit.
👨🍳 *A Call to Action:*
Let's advocate for more transparency and responsibility in the food delivery industry. By choosing more sustainable and health-conscious options, we can drive change that benefits not just individual consumers but also the broader community.
🌟 *Your Health, Your Choice:*
Next time you're about to order from a food app, consider the potential long-term benefits of alternative options like a simple home-cooked meal or a visit to a local restaurant. It's not just about saving time; it's about investing in your health and our planet.
#FoodIndustry #HealthAndWellness #SustainableLiving #TechnologyImpact #ConsumerAwareness
• AI systems teaching themselves
— Baptiste (@BaptisteVicini) November 19, 2024
• Models developing unexpected capabilities
• Programs showing signs of emergent behavior
But the scariest part? pic.twitter.com/AzeU7xA2bt
Highly focused agents on each topic can make magic if trained well :)
Keep Exploring!!!
In all my GenAI product-building efforts, these questions consistently arise across various tasks: Data, ETL, Marketing, NER, Fashion, Design, and ESG.
There are different categories of people based on their work hours.
Some will work 100 hours, others will work 8 hours, and some will find a balance. Some work more when they find the work interesting. It's hard to say whether working 100 hours equals productivity or working 8 hours means mediocrity.
I suggest spending more time on activities you enjoy and less on those you don't. Remember, life is about choices and how you live each day. Personally, I need to read the same subject multiple times to understand it. This doesn't mean I do it all within 100 hours. My learning involves repeating experiments and gaining new perspectives each time.
Understanding a subject, connecting with it, and seeing it from different perspectives are unique learning moments. These cannot be measured simply by the hours spent. Instead of counting hours, focus on the new ideas you discover and how engaging they are. Ask yourself if you are connecting with your work and if your experiments are satisfying.
It's not just about money. Everything in life is finite. Evaluate whether you are productive and if your techniques are effective.
Thank you!!!
Solutions can be built with different levels of accuracy/scalability based on Talent, Time, and Money
Steve Jobs: The difference between good people and great people in software is 50-to-1
— Startup Archive (@StartupArchive_) November 17, 2024
“I’ve always considered part of my job was to keep the quality level of people in the organizations I work with very high. I mean that’s what I consider one of the few things I can contribute… pic.twitter.com/NBlk3UoWSC
Keep Experimenting!!!
How to Implement the Use Case Correctly
Prediction:
OpenAI is set to shift towards domain-centric solutions, making 2025 a transformative year for AI. This transition is based on the data collected and learned from APIs serving different domains, focusing on context window improvements, reasoning patterns, and cross-modal integration. This will significantly enhance decision-making in critical sectors like FinTech and healthcare. By tackling technical challenges and integrating user feedback, these advancements will result in more powerful, tailored AI applications that will reshape entire industries.
Expanding Beyond Language Models
Today, OpenAI is primarily recognized as a leading provider of large language models, but its true capabilities extend much further. Its question-answering abilities, for instance, are exceptionally powerful and evolving rapidly. As clients integrate this technology into critical sectors like FinTech and healthcare, they will unlock improvements in context windows, cross-modal integration, and reasoning by adopting techniques like tree of thought, chain of thought, and graph-based approaches, enabling AI to think and deduce more effectively. Feedback from users will be pivotal in this journey, guiding organizations on how best to structure information flows, when to fine-tune models, when to use Retrieval-Augmented Generation (RAG), and how to balance short-term and long-term memory. This constant feedback loop will allow AI to achieve unprecedented levels of contextual understanding and adaptive reasoning, creating models that align more closely with complex real-world needs.
"OpenAI's journey is no longer just about language—it's about thought and contextual adaptation."
Building Resilient and Adaptive Systems
These advancements will likely lead to the development of more resilient and adaptable systems. Future systems will not only enhance decision-making but also push reasoning capabilities into new territories, setting the stage for increasingly sophisticated agents and refined RAG architectures. These improved architectures are expected to reduce hallucinations, boost accuracy, and lead to products that are more responsive to real-world challenges. Overcoming issues like catastrophic forgetting, hallucinations, and knowledge manipulation will be critical, positioning these systems as robust, reliable solutions across industries.
"Resilient, adaptive AI systems will transform decision-making and redefine industry standards."
Addressing Technical Challenges
Currently, accuracy challenges remain in areas such as domain-relevant embedding, balancing retrieval techniques against accuracy and latency, chunking methods based on usage or query types, contextualization, and routing or re-ranking processes. Yet, these elements are essential for advancing the capabilities of AI models. Despite these ambiguities, ongoing data processing and analysis are paving the way for more focused, domain-specific AI products. Within the next six to eight months, we’re likely to see a new wave of AI-driven applications, from highly specialized agents to RAG applications and APIs crafted for specific industries.
"Technical hurdles are simply steps toward the next wave of AI-driven, domain-specific innovation."
The Transformative Potential of 2025
The year 2025 is set to be a pivotal moment in AI, marking the dawn of domain-centric solutions that will reshape how AI interacts with our world. As more industry-specific applications emerge, OpenAI’s technologies will bring powerful, tailored solutions closer to reality.
"2025: The year AI becomes truly domain-centric, reshaping industries with precision, customized models, and highly accurate agents and RAG systems."
Keep Exploring!!!
#AI #OpenAI #DomainSpecificAI #Innovation #MachineLearning #FinTech #Healthcare #FutureOfAI
AGI = Current AI methods + RLHF experiments + human-applied fine-tuning + custom experiments + a ton of guardrails + domain-specific pattern ingestion
Ilya Sutskever had a TED talk this year and predicted the two most important concepts that we haven't discussed much:
— Haider. (@slow_developer) October 19, 2024
• AGI will have the ability to improve itself
• AGI will work on the next generation of AGI pic.twitter.com/CMvmgTc0oz
Keep Exploring!!!
A troubling case study emerges from the experience with Character.AI, a role-playing application that allows users to create and interact with AI personalities. What began as casual interaction for one student, Sewell, evolved into a concerning pattern of emotional dependency that exemplifies the risks of unmoderated AI engagement:
The student developed an intense emotional attachment to an AI character named Dany
AI Companion Market = Unethical misuse
The Dangerous Reality of AI Companionship Apps: Hidden Threats 🚨
AI Companion Market size was valued at USD 196.63 Billion in 2023 and is projected to reach USD 279.22 Billion by 2031, growing at a CAGR of 36.6% during the forecast period 2024-2031. (Stats)
Warning: Unregulated profits driving dangerous innovation
Without immediate, strict #regulatory action, we risk a global mental health crisis.
#AIRegulation #AIEthics #GenAI #AIRisks #TechPolicy #ResponsibleAI #EthicalAI
Ref - Link1, Link2, Link3, Link4
Keep Thinking!!
After transitioning from consulting to GenAI projects, I've noticed some fascinating shifts in approach and mindset. Here's what I've learned:
🎓 Education is Key
🎯 Problem-First Approach
⚙️ Building for Scale
🚀 Product Development Reality
🔄 Blended Role Benefits
Workshop facilitation
#GenAI #ProductDevelopment #AI #Innovation #StartupLife #TechTransition #Leadership
Keep Exploring!!!
As someone in the trenches of data science hiring for over 7 years, I've watched our field transform dramatically. Recently, a job description for an ML role caught my eye - and not necessarily in a good way. It got me thinking about how our industry's hiring practices often need to catch up to the reality of our work. Let me share some observations:
The Commodity of Code
The Kitchen Sink JD
The Interview Gauntlet
The Missing Pieces
A Call for Pragmatism
Instead, focus on core competencies that drive real value:
The ML landscape is changing faster than ever. Our hiring practices need to keep pace. Let's move beyond the "code on a whiteboard" era and design processes that identify true innovators who can propel our field forward.
Another Good Read - Why We Don't Interview Product Managers Anymore
Keep Exploring!!!
Technology will continue to change the world. A thoughtful approach is needed to prioritize use cases that offer broader positive impacts over those that primarily lead to monetization. This way of thinking can help align AI adoption with human values and ensure a more substantial positive impact on humanity.
Keep Going!!!
Prompt Caching Analysis
Caching is enabled automatically for prompts that are 1024 tokens or longer.
Prompt Caching is enabled for the following models:
Usage Guidelines
1. Place static or frequently reused content at the beginning of prompts: This helps ensure better cache efficiency by keeping dynamic data towards the end of the prompt.
2. Maintain consistent usage patterns: Prompts that aren't used regularly are automatically removed from the cache. To prevent cache evictions, maintain consistent usage of prompts.
3. Monitor key metrics: Regularly track cache hit rates, latency, and the proportion of cached tokens. Use these insights to fine-tune your caching strategy and maximize performance.
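As a sketch of guideline 1, assuming the OpenAI Python SDK: the long, reusable instructions go at the start of the prompt so the cached prefix stays identical across calls, and only the user question varies at the end. The policy text, model name, and question are illustrative assumptions.

```python
# Prompt-caching-friendly structure: static content first, dynamic content last.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# In practice this is the long (1024+ token) document you reuse on every call.
STATIC_POLICY = "FULL RETURN POLICY TEXT GOES HERE ..."

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            # Static, frequently reused content first -> cacheable prefix.
            {"role": "system",
             "content": f"Answer strictly from this policy:\n{STATIC_POLICY}"},
            # Dynamic content last, so it does not invalidate the cached prefix.
            {"role": "user", "content": question},
        ],
    )
    # Cache hit rates can be tracked from the usage details the API returns
    # (e.g. cached token counts on recent API versions) alongside latency.
    return response.choices[0].message.content

print(answer("Can I return an opened item after 30 days?"))
```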
Keep Exploring!!!
It's been over a year since my Retail adoption #GenAI use case went live for a leading U.S. specialty retailer on August 17, 2023.
The results? Double-digit improvement in user page conversions! 📈
🔍 Key Insights:
Innovation Over Integration: Initially, I didn't showcase #GenAI use cases to the #CTO. Instead, I presented a #ReimaginedWorkflow for:
Rethinking AI/ML Implementation: It's not about wrapping AI around existing processes. True impact comes from:
Success Factors in Production:
Beyond Demos: Real adoption comes from solving genuine user needs, not just showcasing capabilities.
#AIStrategy #RetailInnovation #DataDrivenDecisions #DigitalTransformation #AIAdoption #TechLeadership #InnovationInRetail #AISuccessStory
Who else is seeing success with GenAI in their industry? Let's discuss! 👇
In the rush for AI dominance, we're neglecting crucial ethical foundations. Here are 4 pillars often sacrificed at the altar of profit:
These policies will shape the future of AI. We can't wait - the time to act is NOW.
🧠 Data Sovereignty Rights
🏥💰 Regulatory Oversight in Critical Sectors
📚 Transparent AI Lineage
🎨 Recognition of Human Creativity
These aren't just ethical niceties – they're essential for sustainable, trustworthy AI. Short-term profits shouldn't overshadow long-term societal impact.
We need AI that serves humanity, not just shareholders. It's time to realign our priorities. Which pillar do you think is most crucial? Why is it being overlooked?
#AIEthics #ResponsibleAI #TechForGood #AIAccountability #DigitalRights
Tag a tech leader who needs this wake-up call 👇
Keep Advocating ResponsibleAI!!!
TechForGood vs TechForProfits !!!
In the realm of information retrieval and artificial intelligence, Index RAG (Retrieval-Augmented Generation) has emerged as a powerful technique. To fully grasp its potential and limitations, it's crucial to understand the distinction between data storage and retrieval, particularly in the context of indexing strategies. This post will explore two different indexing approaches and their implications for handling queries, especially multipart questions.
The Indexes
Index 1: Broad and Diverse
Composition: 20 pages from history + 20 pages from geography + 20 pages from maths
Strengths:
Index 2: Deep and Focused
Composition: 200 pages focused solely on history
Strengths:
Trade-offs
Breadth vs. Depth
Complexity of Queries
Information Quality
Challenges with Multipart Questions
Consider a multipart question involving history and mathematics:
Using Index 1:
Using Index 2:
Implications for RAG Systems
Query Processing:
Content Generation:
System Architecture:
Conclusion
The choice between a broad, versatile index (Index 1) and a deep, focused index (Index 2) significantly impacts the retrieval effectiveness of an information system. Understanding these dynamics is crucial for users and developers alike to create effective RAG systems.
When designing or using RAG systems, consider:
By carefully weighing these factors, one can optimize the balance between data storage and retrieval capabilities in Index RAG systems, ultimately enhancing the quality and relevance of generated responses.
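A toy sketch of the routing idea under my own assumptions: the two dictionaries stand in for real vector stores, and the keyword router stands in for an LLM-based query decomposer. Each part of a multipart question goes to the index most likely to cover it, preferring the deep index when its single domain applies.

```python
# Route each sub-query to the broad index (Index 1) or the deep index (Index 2).
INDEX_1 = {  # Index 1: broad and diverse
    "history": ["20 pages of world history summaries"],
    "geography": ["20 pages of geography notes"],
    "maths": ["20 pages of maths worked examples"],
}
INDEX_2 = {  # Index 2: deep and focused
    "history": ["200 pages of detailed history sources"],
}

def route(sub_query: str) -> list[str]:
    """Pick a topic, then prefer the deep index when it covers that topic."""
    q = sub_query.lower()
    if "war" in q or "empire" in q:
        topic = "history"
    elif "calculate" in q or "how many" in q:
        topic = "maths"
    else:
        topic = "geography"
    return INDEX_2.get(topic) or INDEX_1.get(topic, [])

multipart = [
    "Which empire fought the War of 1812?",
    "Calculate how many years passed between 1812 and 1815.",
]
for part in multipart:
    print(part, "->", route(part))
```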
Creative and Learning Use Case
Instead of playing sports or watching cartoons, kids nowadays are coding. 👀 pic.twitter.com/POX50bt4MH
— Vivek Naskar (@vivek_naskar) September 8, 2024
Wrong Guardrails Applied, Content for opinions
What other options
Keep Going!!!
ETL and data pipelines are redefined in #GenAI applications. Your #ETL now has to support unstructured data as well.
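As one illustration of what that shift can look like in practice (a minimal sketch under my own assumptions, not a definitive list): the pipeline extracts text from unstructured documents, chunks it, and emits records with lineage metadata, ready for embedding and indexing.

```python
# Minimal unstructured-document ETL step: chunk raw text and emit records
# that carry the lineage needed downstream. chunk_size is in characters.
def chunk(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def to_records(doc_id: str, raw_text: str) -> list[dict]:
    """One record per chunk, ready for an embedding/indexing stage."""
    return [
        {"doc_id": doc_id, "chunk_id": i, "text": c, "char_len": len(c)}
        for i, c in enumerate(chunk(raw_text))
    ]

records = to_records("contract_001", "Lorem ipsum dolor sit amet. " * 200)
print(len(records), records[0]["chunk_id"], records[0]["char_len"])
```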
Freedom entails risks, but it's worth it. You pave your own path.
Sure, feel free to ping me if you are interested in harnessing the power of AI.
Keep Exploring!!!
When learners can see 'Intelligence with GenAI' for themselves, it is very heartening to see the solutions built during the session :)
Some feedback after the 12-hour GenAI session:
Benchmark against domain dataset
Good Data = Good Strategy = Quality Experiments
Happy Low Latency!!!
What is required to turn Data into AI Products? - My perspectives
The main reasons cited are:
So, the lessons, from my perspective, are:
Keep Exploring!!!
While evaluating answers: Some candidates document well, attempt, and submit answers but miss the basics. This reflects both intent and missed guidance in learning. High potential is evident, but basics are either overlooked or dot-connecting skills are lacking.
While teaching: Some PhD/lateral folks tend to generalize everything or focus on proving theories break. Learning is not about proving your knowledge but about gaining a balanced perspective. One class is not sufficient to judge anything. Observing these types of learners makes me feel sad as they are so short-sighted.
Education is not the same as mindset, and experience does not mean competency!!!
Keep Exploring!!!
Memorization vs Generalization
When you develop #GenAI apps, after a certain stage, when things work fine, the immediate next question is
I don't want my life to be memorization: Company1, Company2, ... Exploring outside comfort zones provides diverse perspectives.
Earlier I had time to regret; now I don't have time to think about anything. A long day of managing and solving different problems through different lenses of execution. Sometimes an experience doesn't fill your pocket but fills your soul. In the end, I want to smile at death, having tried everything on my wishlist.
Keep Exploring!!!
In the digital age, managing customer data can be a daunting task. Here are some points to consider:
No data is pristine; changes should not be spontaneous. When dealing with data, especially on a large scale, it's imperative to have a robust process in place. Having a process to automate cleanup is essential to scaling your solution.
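A minimal sketch of such an automated cleanup step, using pandas with my own illustrative rules (normalize, dedupe, drop incomplete rows) plus an audit report so that no change is spontaneous or untracked:

```python
# Automated cleanup with an audit trail: normalize fields, drop duplicates and
# incomplete rows, and report how many rows changed on each run.
import pandas as pd

def clean_customers(df: pd.DataFrame) -> tuple[pd.DataFrame, dict]:
    report = {"rows_in": len(df)}
    df = df.copy()
    df["email"] = df["email"].str.strip().str.lower()
    df["name"] = df["name"].str.strip().str.title()
    df = df.dropna(subset=["email"])
    df = df.drop_duplicates(subset=["email"])
    report["rows_out"] = len(df)
    report["rows_dropped"] = report["rows_in"] - report["rows_out"]
    return df, report

raw = pd.DataFrame({
    "name": [" alice ", "BOB", "alice "],
    "email": ["A@x.com", "b@x.com ", "a@x.com"],
})
cleaned, audit = clean_customers(raw)
print(cleaned, audit, sep="\n")
```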
Keep exploring!
There is some leftover space for Graph
Keep Exploring!!!
Some days are Sigmoid. Some days are ReLU. Novelty is not inventing new stuff but stitching the right techniques in the right proportion.
A few more I relate to my work style :)
Keep Going!!!
Txt2SQL is easier in straightforward examples; a real database has a ton of complications.
Example-
Columns can be generic (Attribute1, Attribute2), where Attribute1 holds a key and Attribute2 holds a value. A ton of learning from working on it; still trying to get a hold of it :)
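To show why this trips up naive text-to-SQL, here is a hedged illustration with a hypothetical product_attributes table laid out in that key/value style; the table and column names are my own assumptions, not a real schema.

```python
# Generic key/value (EAV-style) columns force a pivot-style query, which is
# where naive Txt2SQL generation tends to break.
QUESTION = "What is the warranty period of product P-42?"

# What a naive Txt2SQL model tends to emit (fails: no 'warranty_period' column):
NAIVE_SQL = "SELECT warranty_period FROM product_attributes WHERE product_id = 'P-42';"

# What the schema actually requires (Attribute1 = key, Attribute2 = value):
CORRECT_SQL = """
SELECT Attribute2 AS warranty_period
FROM product_attributes
WHERE product_id = 'P-42'
  AND Attribute1 = 'warranty_period';
"""

print(QUESTION, CORRECT_SQL, sep="\n")
```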
Keep Exploring!!!
Kids of the LLM generation, learning with LLM products, will have a different perspective of thinking: before and after ChatGPT :)
Some questions and answers take days or weeks, and sometimes the approach moves from LLM to NLP; it's a blend of techniques that makes things work.
A lot of challenges, but one at a time: balancing consistency, accuracy, and latency. If you want to solve real problems, you can connect to explore potential learning and experimentation opportunities or dedicate some learning hours. Please drop a note to career@proplens.ai
#learnings #NLP #Datascience #RAG #LLMs #perspectives
Keep Exploring!!!
In one particular use case, it's a constant process of experimentation and iterations.
Some successes, some lessons, and some learning.
Keep Exploring!!!
Over the past 4 months, I've been working with really small teams, and the difference in communication dynamics compared to larger teams has been striking.
In my previous roles in product and consulting, I achieved success but with a considerable amount of effort spent convincing and negotiating with numerous people. Here are some of the specific challenges faced when pushing ideas in larger, more mature companies compared to nimble startups:
Increased Lines of Communication: With more team members in mature companies, ensuring everyone is on the same page becomes significantly harder. There's a higher risk of changes in approach, iterations, feedback, and information being lost or mistranslated as it travels through various levels. In contrast, startups often have flatter structures, making communication more direct and less prone to distortion.
Slower Decision-Making Processes: Larger teams often have more layers of approval, which can slow down decision-making. Every stakeholder has their own priorities and concerns, adding to the complexity. Startups, with their smaller teams, can often make decisions more quickly, which allows for faster iterations and innovation.
Greater Need for Consensus: In smaller teams typical of startups, reaching a consensus or getting buy-in for new ideas is often easier. Larger teams in mature companies require more effort to align everyone's visions and goals. This can lead to lengthy discussions and compromises, which may dilute the original idea.
More Stakeholders to Convince: Larger teams come with more stakeholders, each with their own perspectives and interests. This multiplicity can make it challenging to get everyone on board with a new idea. Startups, on the other hand, usually have fewer stakeholders, and the founders or key decision-makers are more accessible, simplifying the process of getting buy-in.
However, the journey you take, whether in a startup or a mature company, will reward you for the risks and decisions you choose to travel with. Each environment has its own set of challenges and rewards, but understanding these dynamics can help in navigating them more effectively.
Keep Going!!!
#LLM is #Cool! #PromptEngineering is great. Demonstrating a prototype is beneficial, but for a production application, preparing data and consistently getting Top-K results are essential. Handling diverse queries and managing large datasets vs. remembering minimal key data are significant trade-offs and design choices that come with domain knowledge. Building a working solution is easy, but achieving predictable, low-latency, and consistent behavior requires constant iteration and evolution. #Perspectives and #AppliedLearning are key. #AI #MachineLearning #DataScience #Prototypes #ProductionReady #DataPrep #Latency #Consistency #Iteration
Keep Exploring!!!
How Alexa dropped the ball on being the top conversational system on the planet
— Mihail Eric (@mihail_eric) June 11, 2024
—
A few weeks ago OpenAI released GPT-4o ushering in a new standard for multimodal, conversational experiences with sophisticated reasoning capabilities.
Several days later, my good friends at PolyAI…
There's a big difference between solving a problem from first principles vs applying a solution template you previously memorized. It's like the difference between a senior software engineer and a script kiddie that can't code.
— François Chollet (@fchollet) June 10, 2024
A script kiddie that has a gigantic bank of scripts… https://t.co/l2W8FUnwj8
Keep Exploring!!!
My take from video - Link
Phase I: The Importance of Titles
Phase II: Embrace Risk on the Road to Titles
Phase III: Deliver Impact, Titles Will Follow
#humanoids = #llm powered decisions + #robotics + custom models trained for specific tasks, Build custom models to suit specific needs https://t.co/sxcaqCk46U
— Siva (@sivaram2k10) May 12, 2024
Keep Exploring!!!
For questions / feedback / career opportunities / training / consulting assignments / mentoring - please drop a note to sivaram2k10(at)gmail(dot)com
Coach / Code / Innovate