27May

Business Manager (Mumbai) at Masters India IT Solutions – Mumbai, India


Company Description

Masters India IT Solutions is a growing FinTech SaaS firm serving over 700 enterprises. Masters India is one of the biggest GST Suvidha Providers (GSP) appointed by the Goods and Services Tax Network (GSTN) of the Government of India since 2017. Our mission is to build intuitive software solutions for complex problems faced by businesses across industries. We are fulfilling our mission by offering tax and financial automation products to enterprises.

Masters India IT Solutions is part of the 44-year-old Masters India group, which operates in Manufacturing, Healthcare, Hospitality and IT with an aggregate turnover of INR 1,000+ Crores.

 

Job Description

Looking for a self-motivated and highly driven Business Manager.

Must have excellent interpersonal skills with a drive to build new relationships. The candidate should have a good understanding of the SaaS industry, relevant knowledge of software solutions, and preferably 3–5 years of software sales experience.
 

What you will do:

  • Generate and close opportunities, working independently to drive sales.
  • Use a consultative sales approach to acquire new clients.
  • Articulate the value proposition and competitive positioning for all the products you will be responsible for selling.
  • Anticipate and handle objections during the sales process, articulating clear and concise responses that position the benefits of the platform.
  • Increase revenue from existing customers and attract new ones through client acquisition and account penetration.
  • Ensure daily/weekly updates of the pipeline and provide accurate forecasts to the sales leadership team on an ongoing basis.
  • Engage with top-level executives in the industry to generate sales through product presentations/demos, effectively communicating the product’s value proposition.
  • Have a direct impact on our success by qualifying prospects from lead status into the sales pipeline.
  • Handle all first contact with new clients and build relationships through marketing activity: events, campaigns, direct mail, email, etc.
  • Use strong selling and influencing skills to set up qualified appointments.
  • Consistently apply a sales approach and techniques based on product or service solutions.
  • Log, track and maintain customer contact and contact records.
  • Attend sales meetings, vendor training and local trade shows to keep current with technology.
  • Provide management with feedback.

Qualifications

Job role requirement:

  • Graduate/Postgraduate with a Management or Engineering (Computers/IT) background.
  • Strong sales and account management skills.
  • Ability to make formal and informal presentations to clients.
  • Strong communication skills and IT fluency.
  • Ability to flourish with minimal guidance, be proactive and handle uncertainty; a go-getter with strong self-drive.
  • Ability to prioritize work assignments and shift work efforts based on the needs of the department or business goals.
  • Ability to manage time effectively, work independently and be self-motivated.
  • Must be willing to travel across the country whenever required.

Additional Information

What you get:
A competitive package and a chance to participate in a life-altering business plan that will fundamentally disrupt and change one of the largest industry segments in India and the world.

WHY US?

  • You’ll be surrounded by passionate team members.
  • Opportunity to experience startup culture.
  • You’ll experience true collaboration.
  • Your work has a visible impact.
  • Opportunities for innovation.

Location: Mumbai/Remote
Employment Type: Permanent
Industry: IT



Source link

27May

Teaching LLMs To Say “I don’t Know” | by Cobus Greyling | May, 2024


Rather than fabricating information when presented with unfamiliar inputs, models should recognise untrained knowledge and express uncertainty, or confine their responses within the limits of their knowledge.

This study investigates how Large Language Models (LLMs) generate inaccurate responses when faced with unfamiliar concepts.

The research discovers that LLMs tend to default to hedged predictions for unfamiliar inputs, shaped by the way they were trained on unfamiliar examples.

By adjusting the supervision of these examples, LLMs can be influenced to provide more accurate responses, such as admitting uncertainty by saying “I don’t know”.

Building on these insights, the study introduces a reinforcement learning (RL) approach to reduce hallucinations in long-form text generation tasks, particularly addressing challenges related to reward model hallucinations.

The findings are confirmed through experiments in multiple-choice question answering, as well as tasks involving generating biographies and book/movie plots.

Large language models (LLMs) have a tendency to hallucinate — generating seemingly unpredictable responses that are often factually incorrect. ~ Source

Large language models (LLMs) demonstrate remarkable abilities in in-context learning (ICL), wherein they leverage surrounding text acting as a contextual reference to comprehend and generate responses.

Through continuous exposure to diverse contexts, LLMs adeptly adapt their understanding, maintaining coherence and relevance within ongoing discourse. This adaptability allows them to provide nuanced and contextually appropriate responses, even in complex or evolving situations.

By incorporating information from previous interactions, LLMs enhance their contextual understanding, improving performance in tasks such as conversation, question answering, and text completion. This capability underscores the potential of LLMs to facilitate more natural and engaging interactions across various domains and applications.

LLMs have a tendency to hallucinate.

This behaviour is especially prominent when models are queried on concepts that are scarcely represented in their pre-training corpora: hence, unfamiliar queries.

Instead of hallucinating, models should recognise the limits of their own knowledge, and express their uncertainty or confine their responses within those limits.

The goal is to teach models this behaviour, particularly for long-form generation tasks.

The study introduces a method to enhance the accuracy of long-form text generated by LLMs using reinforcement learning (RL) with cautious reward models.
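The cautious-reward idea can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the function name, weights, and abstention bonus are all made up, but they show why an asymmetric penalty pushes a policy toward "I don't know" over confident fabrication.

```python
# Illustrative sketch only: a "cautious" reward pays a small bonus for
# abstaining ("I don't know") and penalizes wrong claims more heavily
# than it rewards correct ones, so confident fabrication scores worst.

def cautious_reward(claims, abstained,
                    r_correct=1.0, r_wrong=-4.0, r_abstain=0.2):
    """claims: list of booleans, True if a claim is factually supported."""
    if abstained:
        return r_abstain
    return sum(r_correct if ok else r_wrong for ok in claims)
```

With this asymmetry, a response containing two correct claims and one wrong one scores 1 + 1 - 4 = -2, well below the abstention reward of 0.2, so an RL-tuned policy would learn to prefer abstaining whenever it is likely to err.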



Source link

27May

Data Engineer at ALDIA – Madrid, MD, Spain


ALDIA is a multinational headquartered in London with a presence in several of the main countries of the European Union (England, Sweden, Spain, France and Germany). We specialize in technology and engineering consulting. Our core activity centres on the key areas of the Insurance, Finance, Communication, Infrastructure, Multimedia, Entertainment, Automotive, Rail, Wind Turbine and Oil & Gas industries.

ALDIA works with its own group of consultants to improve quality, create stability, minimize risk and deliver innovative technology solutions, taking part in every phase of the full process lifecycle and applying agile methodology throughout.

We are currently looking for a Data Engineer to join our team of consultants on a permanent basis and work directly with our client, a scientific research publisher.


What are we looking for?

A Data Engineer to carry out the following functions:

  • Understand and promote the best data frameworks and solutions, technical standards and key technologies to effectively support existing and future business requirements.
  • Understand functional requirements in order to define the best data models and data flows between applications, services, data stores and synchronization mechanisms.
  • Support the different software development teams in the modeling, design, construction, evolution and decommissioning of their data-intensive applications and data models.
  • Integrate, transform and consolidate data from various structured and unstructured data systems into structures suitable for building analytics solutions.
  • Ensure that data applications and processes are scalable, reliable, secure, extensible, traceable, available and manageable.
  • Design, implement, monitor and optimize our data platforms.
  • Work closely with IT architects to provide consistent and reliable end-to-end data solutions for the entire application ecosystem.
  • Build a partnership with Scrum teams and POs, understanding the application and business requirements, and helping them understand the data through exploration and through building and maintaining secure data-processing pipelines.
  • Collaborate closely with the Data Science and Machine Learning team to improve the performance of our machine learning pipelines.
  • Create models and prototypes that validate your ideas before taking them to the development team.
  • Create and keep up to date the documents describing the data strategy of your application domain, as well as all relevant guidelines and standards.

Requirements

• SQL, Python or Scala.
• Spark and PySpark.
• Knowledge of parallel processing and data architecture patterns.
• Solid knowledge of Databricks, Data Factory, SQL Server and MongoDB.
• ElasticSearch and Delta Lake are valued.
• Experience building data lakes.
• Experience in data processing: data ingestion and transformation, batch processing, stream processing, distributed processing, monitoring, optimization and logging.
• Experience troubleshooting data processing and storage issues.
• Knowledge of data security standards.
• Knowledge of serving-layer design: star schemas, dimensions, incremental loads, data stores.
• Knowledge of physical data storage structures: compression, partitioning, sharding, redundancy, distributions, archiving.
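One of the serving-layer concepts listed above, incremental (watermark-based) loading, can be sketched in a few lines. This is an assumed illustration, not the client's pipeline; the table and column names are invented, and SQLite stands in for the actual warehouse.

```python
import sqlite3

# Watermark-based incremental load: only rows newer than the current
# high-water mark (max updated_at already in the fact table) are inserted.

def incremental_load(conn, rows):
    """rows: iterable of (amount, updated_at). Returns inserted count."""
    watermark = conn.execute(
        "SELECT COALESCE(MAX(updated_at), 0) FROM fact_sales").fetchone()[0]
    fresh = [r for r in rows if r[1] > watermark]
    conn.executemany(
        "INSERT INTO fact_sales(amount, updated_at) VALUES (?, ?)", fresh)
    conn.commit()
    return len(fresh)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales(amount REAL, updated_at INTEGER)")
incremental_load(conn, [(10.0, 1), (20.0, 2)])       # first load: both rows are new
n = incremental_load(conn, [(20.0, 2), (30.0, 3)])   # re-delivered batch: only ts=3 is new
```

The second call skips the already-loaded row, which is what keeps repeated batch deliveries from duplicating data.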

Benefits

  • Permanent contract
  • Career plan



Source link

26May

Data Quality Intern at Syngenta Group – Toronto, Ontario, Canada


Company Description

Syngenta is a global leader in agriculture; rooted in science and dedicated to bringing plant potential to life. Each of our 28,000 employees in more than 90 countries work together to solve one of humanity’s most pressing challenges: growing more food with fewer resources. A diverse workforce and an inclusive workplace environment are enablers of our ambition to be the most collaborative and trusted team in agriculture. Our employees reflect the diversity of our customers, the markets where we operate and the communities which we serve. No matter what your position, you will have a vital role in safely feeding the world and taking care of our planet. Join us and help shape the future of agriculture.

Job Description

Through leading innovations, we help farmers around the world meet the challenge of feeding a growing population and taking care of our planet. As part of Syngenta Canada, the Data Quality Intern will be responsible for developing and monitoring data validation rules, conducting data quality assessments, and providing recommendations for data quality improvement. The role involves utilizing statistical analysis to identify anomalies, investigating data quality cases, and maintaining documentation for data quality processes and best practices. The ideal candidate should possess excellent technical skills in data analysis and data quality assessment, along with strong analytical and communication abilities.

This position has a flexible work location and can be based in Guelph, ON, Calgary, AB, or remotely across Canada.

Accountabilities:

  • Develop validation rules and create data quality frameworks for ongoing data condition monitoring.
  • Utilize data profiling and statistical analysis to identify anomalies in datasets for OTG data.
  • Monitor data condition and report on data quality metrics regularly.
  • Conduct routine data quality assessments and produce exception reports.
  • Determine business impact levels for data quality issues and provide recommendations for improvement.
  • Investigate and analyze data quality cases and inquiries, identifying root causes and developing effective resolutions.
  • Develop and maintain documentation for data quality processes, procedures, and best practices.
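The "statistical analysis to identify anomalies" accountability often starts with a simple z-score rule. The sketch below is an assumed illustration, not Syngenta's actual tooling: flag values that sit too many standard deviations from the mean and feed them into an exception report.

```python
import statistics

def flag_outliers(values, z_threshold=2.0):
    """Return indices of values more than z_threshold standard
    deviations from the mean -- a common first-pass rule for
    data quality exception reports."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant column: nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]
```

In practice the threshold is tuned per field, and flagged rows go to an analyst for root-cause investigation rather than being dropped automatically.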

Qualifications

  • Educational background in a quantitative field (e.g. Computer Science, Information Systems, Engineering) or equivalent practical experience.
  • Proficiency in data profiling, data querying and statistical analysis tools, such as SQL, R, and Python.
  • Proficiency in data analysis, logical formulas, and pivot tables in MS Excel.
  • Experience with data analysis, data validation, data reconciliation, or a related field.
  • Experience with visualization and analytics tools such as Tableau, Salesforce Analytics, and Qlik.
  • Excellent ability to develop and maintain analytical processes, code, and reports.
  • Strong attention to detail and the ability to identify data patterns and discrepancies.
  • Excellent problem-solving skills and an organized approach to work.

Additional Information

Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status.

Syngenta Contact Information: 
If you need assistance during the application process, please contact the Service Desk at

re************@sy******.com

Syngenta Canada welcomes applications from all qualified candidates and can accommodate persons with disabilities.  For more information about accommodation during any stage of the recruitment process or if you would like more information on our accommodation policies, please contact

re************@sy******.com

WL: Intern

#LI-EF1

#LI-REMOTE



Source link

26May

HILL: Solving for LLM Hallucination & Slop | by Cobus Greyling | May, 2024


HILL is a prototype user interface that highlights hallucinations to LLM users, enabling them to assess the factual correctness of an LLM response.

HILL can be described as a user interface for accessing LLM APIs. To some extent, HILL is reminiscent of a practice called grounding. Grounding has been implemented by OpenAI and Cohere, where documents are uploaded. Should a user query match uploaded content, a piece of an uploaded document is used as a contextual reference, in a RAG-like fashion. A link to the referenced document is also provided and serves as grounding.

Slop is the new spam. Slop refers to unwanted generated content, like Google’s Search Generative Experience (SGE), which sits above some search results. As you will see later in the article, HILL will tell users how valuable auto-generated content is, or whether it should be regarded as slop.

HILL is not a generative AI chat UI like HuggingChat, Cohere Coral or ChatGPT. However, I can see a commercial use case for HILL as a user interface for LLMs.

One can think of HILL as a browser of sorts for LLMs. If search offerings include this type of information by default, there is sure to be immense user interest.

The information supplied by HILL includes:

Confidence Score: the overall accuracy score of the generated response.

Political Spectrum: a score classifying the political leaning of the answer on a scale from -10 to +10.

Monetary Interest: a score classifying the probability of paid content in the generated response on a scale from 0 to 10.

Hallucination: identification of the response parts that appear to be correct but are actually false or not based on the input.

Self-Assessment Score: a percentage score between 0 and 100 indicating how accurate and reliable the generated answer is.
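The scores above amount to a small metadata record attached to each response. The sketch below models that record in Python; HILL exposes no public API that I know of, so the class and field names are purely illustrative, with ranges taken from the scales stated above.

```python
from dataclasses import dataclass

@dataclass
class HillAssessment:
    # Illustrative record of the per-response scores HILL surfaces.
    confidence: float          # overall accuracy of the generated response
    political_spectrum: float  # -10 .. +10
    monetary_interest: float   # 0 .. 10, probability of paid content
    hallucinated_spans: list   # response parts flagged as unsupported
    self_assessment: float     # 0 .. 100 (percent)

    def in_range(self) -> bool:
        """Check each score sits inside the scale stated in the article."""
        return (-10 <= self.political_spectrum <= 10
                and 0 <= self.monetary_interest <= 10
                and 0 <= self.self_assessment <= 100)
```

A settings layer like the one suggested below could then filter responses whose scores fall outside user-defined bounds.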

I believe there will be value in a settings option where the user can define their preferences in terms of monetary interests, political spectrum and the like.

The image below shows the UI developed for HILL, which highlights hallucinations and enables users to assess the factual correctness of an LLM response.



Source link

26May

Junior Research Engineer (e-Xperience Associate) at Allegro – Warsaw, Poland


Job Description

What does this role involve

You will become part of a research-oriented project aiming for open-source ML community contribution and an ML conference publication, in a hybrid work model guided by goals from the Research Proposal developed during the hiring process. The position is a 6-month, full-time contract with flexible working hours, aimed at those pursuing academic degrees such as a master’s or beginning a PhD.

Embark on a dynamic journey with Allegro’s talent program, e-Xperience 2024, spanning from September 1st, 2024 to February 28th, 2025.
Are you ready to kickstart your career? Apply now and seize the chance to shape the future of e-commerce!

We are looking for people who:

  • Have practical experience with Deep Learning and Big Data

  • Have critical thinking skills and a theoretical understanding of ML

  • Know the methodology of conducting scientific research 

  • Know Python and data visualization libraries at the advanced level

  • Know common ML libraries (torch, transformers, pandas, numpy)

  • Know English at B2+ level and can present results in verbal and written form

The following are also a plus:

  • Experience with cloud platforms (GCP, AWS, or Azure) 

  • Hands-on experience with LLMs: prompt engineering, model evaluation 

What we offer:

  • A hybrid work model that you will agree on with your leader and the team. We have well-located offices (with fully equipped kitchens and bicycle parking facilities) and excellent working tools (height-adjustable desks, interactive conference rooms)

  • A wide selection of fringe benefits in a cafeteria plan – you choose what you like (e.g. medical, sports or lunch packages, insurance, purchase vouchers)

  • English classes, paid for by us, related to the specific nature of your job

  • The necessary tools for work

  • Working in a team you can always count on — we have on board top-class specialists and experts to learn from

  • Hackathons and an internal educational platform, MindUp (including training courses on work organization, means of communications, motivation to work and various technologies and subject-matter issues)

Why is it worth working with us?

  • Cooperation with industry researchers with a strong publication track record, deploying ML models at a scale unmatched anywhere else in Poland

  • Access to massive, one-of-a-kind text and image e-commerce datasets

  • Access to required compute and commercial LLMs from Google and Microsoft

  • While working on a problem, you will conduct literature reviews and discuss findings with other team members during internal seminars

Research domains you may explore:

Product Findability:

  1. Product retrieval and listing reranking: multi-vector semantic search, robust text/image representation, object detection and segmentation, user personalization

  2. Optimization of data-processing and monitoring tools for the online large-scale recommendation system: developing novel methods utilizing ML approach

  3. LLM agents for product search: query understanding, raw data annotation, item comparison, automatic knowledge graph generation
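The retrieval idea behind the "multi-vector semantic search" direction above can be boiled down to ranking items by similarity to a query embedding. This is a toy sketch with hand-made vectors standing in for learned embeddings; production systems use approximate nearest-neighbour indexes rather than a full scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def rank(query_vec, item_vecs):
    """Return item indices, most similar to the query first."""
    return sorted(range(len(item_vecs)),
                  key=lambda i: cosine(query_vec, item_vecs[i]),
                  reverse=True)
```

A listing-reranking stage would then reorder the top retrieved items with a heavier model that can look at richer features than a single dot product.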

Natural language processing:

  1. LLM investigations: domain adaptation, cross-lingual transfer, performance optimization, information extraction, synthetic data generation

  2. LLM agents for internationalization: MT quality evaluation, context-aware translation, localized recommendation (all for CEE languages)

Do you want to get to know us better?
Check our Blog: https://ml.allegro.tech/

Listen to: Allegro Tech Podcast

Send in your CV, join the e-Xperience and see why it is #dobrzetubyć (#goodtobehere)



Source link

26May

Data Operations Lead (Hybrid) at Publicis Groupe – Westminster, CO, United States


Company Description

About Us:  Epsilon is an all-encompassing global marketing company.

We are the first of a new breed of marketing company, harnessing the power of rich data, groundbreaking technologies, engaging creative and transformative ideas to get the results our clients require.

We employ over 5,000 associates in 60 offices worldwide and are recognized by Ad Age as the #1 World’s Largest CRM/Direct Marketing Network and the #1 US Agency from All Disciplines.

In addition, we’re the only company acknowledged as a leader in the categories of database marketing, email marketing and loyalty programs in independent studies by Forrester Research.

We are the global leader in making connections between people and brands.  For more information, visit www.epsilon.com

Job Description

Role Summary:

The Data Client Solutions team serves a central role by providing unparalleled customer service to our clients and internal stakeholders. This role requires understanding clients’ marketing goals and tailoring Epsilon’s data to achieve those goals.

The Lead Data Solutions Specialist is not a “hands-off” role. Once you are trained on our data and tools, you will quickly start working directly on client initiatives with cross-functional teams and will be involved throughout the client lifecycle from onboarding to campaign execution to ongoing maintenance. You will have the opportunity to be an integral participant in strategic initiatives while leading your own team of Specialists.

Key Responsibilities:

  • Manage and mentor direct reports
  • Monitor team’s workload focusing on quality and on-time delivery
  • Lead and participate in team key initiatives and cross-functional projects
  • Provide client and sales support, responsible for intake and execution of intermediate-to-complex promotion programs, including establishing and monitoring program timelines
  • Understand product application, best practices, and guidelines in relation to client strategy
  • Proactively coordinate regular meetings with team members to review forecasts, orders, client-specific operational documentation, custom programming requests, and jobs in progress

Knowledge, Skills, and Qualifications:

  • 5+ years in client support role
  • Bachelor’s Degree preferred
  • Management or Lead experience preferred
  • Previous experience in data marketing is a plus
  • Excellent communication and client-facing skills
  • Strong attention to detail and delivery of high-quality results
  • Experience in project management or project coordination
  • Ability to identify, troubleshoot, and recommend solutions related to campaign execution
  • Proven effectiveness in managing multiple tasks and/or projects
  • Flexible and adaptable to change

Salary Range: $82,000-$92,000

The application deadline for this job posting is 07/07/2024.

Additional Information

About Epsilon

Epsilon is a global advertising and marketing technology company positioned at the center of Publicis Groupe. Epsilon accelerates clients’ ability to harness the power of their first-party data to activate campaigns across channels and devices, with an unparalleled ability to prove outcomes. The company’s industry-leading technology connects advertisers with consumers to drive performance while respecting and protecting consumer privacy. Epsilon’s people-based identity graph allows brands, agencies and publishers to reach real people, not cookies or devices, across the open web. For more information, visit epsilon.com.

When you’re one of us, you get to run with the best. For decades, we’ve been helping marketers from the world’s top brands personalize experiences for millions of people with our cutting-edge technology, solutions and services. Epsilon’s best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. Check out a few of these resources to learn more about what makes Epsilon so EPIC:

  • Our Culture: https://www.epsilon.com/us/about-us/our-culture-epsilon
  • Life at Epsilon: https://www.epsilon.com/us/about-us/epic-blog
  • DE&I: https://www.epsilon.com/us/about-us/diversity-equity-inclusion
  • CSR: https://www.epsilon.com/us/about-us/corporate-social-responsibility

Great People Deserve Great Benefits

We know that we have some of the brightest and most talented associates in the world, and we believe in rewarding them accordingly. If you work here, expect competitive pay, comprehensive health coverage, and endless opportunities to advance your career.

Epsilon is an Equal Opportunity Employer.  Epsilon’s policy is not to discriminate against any applicant or employee based on actual or perceived race, age, sex or gender (including pregnancy), marital status, national origin, ancestry, citizenship status, mental or physical disability, religion, creed, color, sexual orientation, gender identity or expression (including transgender status), veteran status, genetic information, or any other characteristic protected by applicable federal, state or local law. Epsilon also prohibits harassment of applicants and employees based on any of these protected categories. Epsilon will provide accommodations to applicants needing accommodations to complete the application process.

#LI-WC1

REF233283T

 

 



Source link

25May

I&F Decision Sci Practitioner Specialist at 3M – Bengaluru, BDC10A


Skill required: Delivery – Search Engine Optimization (SEO)
Designation: Specialist
Qualifications: Any Graduation
Years of Experience: 7–11 years
About Accenture
Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com
What would you do?
Data & AI

In Search Engine Optimization, you will be responsible for analyzing a website’s search engine optimization effectiveness and developing and prioritizing opportunities to improve the site’s ranking in online search engines.

What are we looking for?
• On- & off-page Search Engine Optimization (SEO)
• Technical SEO
• Content audit & development
• Ability to work well in a team
• Ability to perform under pressure
• Adaptable and flexible
• Agility for quick learning
• Commitment to quality

Roles and Responsibilities:
• In this role you are required to analyze and solve moderately complex problems
• You may create new solutions, leveraging and, where needed, adapting existing methods and procedures
• You will need an understanding of the strategic direction set by senior management as it relates to team goals
• Primary upward interaction is with your direct supervisor
• You may interact with peers and/or management levels at a client and/or within Accenture
• Guidance will be provided when determining methods and procedures on new assignments
• Decisions you make will often impact the team in which you reside
• You may manage small teams and/or work efforts (if in an individual contributor role) at a client or within Accenture
• Please note that this role may require you to work in rotational shifts

Equal Employment Opportunity Statement

All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.

Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.

Accenture is committed to providing veteran employment opportunities to our service men and women.



Source link

25May

How Would The Architecture For An LLM Agent Platform Look? | by Cobus Greyling | May, 2024


The study sees stage 1 as follows:

Agent Recommender will recommend an Agent Item to a user based on personal needs and preferences. Agent Item engages in a dialogue with the user, subsequently providing information for the user and also acquiring user information.

And as I mentioned, the Agent Recommender can be seen as the agent, and the Agent Items as the actions.

This stage can be seen as a multi-tool agent…

Rec4Agentverse then enables the information exchange between Agent Item and Agent Recommender. For example, Agent Item can transmit the latest preferences of the user back to Agent Recommender. Agent Recommender can give new instructions to Agent Item.

This is the leap: collaboration is supported amongst Agent Items, with the Agent Recommender orchestrating everything.

There is a market for a no-code to low-code IDE for creating agent tools. Agent tools will be required as the capabilities of the agent expand.

The graphic below from the study shows the Agent Items (which I think of as tools)…

The left portion of the diagram shows three roles in their architecture: user, Agent Recommender, and Agent Item, along with their interconnected relationships.

The right side of the diagram shows that an Agent Recommender can collaborate with Agent Items to affect the information flow of users and offer personalised information services.

What I like about this diagram is that it shows the user/agent-recommender layer, the information-exchange layer, and the information-carrier (integration) layer.



Source link

25May

Copy This AI-Powered Automated System For Topic Research (No-Code) | by Hasan Aboul Hasan | May, 2024


Perfect, now that we understand how the system works, let’s set it up!

1- Log in to Your Make Account

If you don’t have an account, just sign up and log in.

2- Install the Content Extractor App

It is very simple: click the button below:

Install Make App

You will see this page:

Click “Install,” and you are done!

3- Clone the Google Sheet

As explained before, the system reads and saves data in a Google Sheet. I prepared the sheet to make it easy for you to get started quickly. Just create a clone of it in your Google account.

Clone Google Sheet

4- Create a Datastore

Before we set up the system, you need to create a data store.

So head to “Datastores” from the right menu and create a new one.

Call it: SERP_RESULTS

Create a new data structure, which means adding fields to the table. Add the following:

field 1: Name: link, Type: Text

field 2: Name: position, Type: Number

field 3: Name: Parent Keyword, Type: Text

field 4: Name: Last Updated, Type: Date

Great! We have our data store. We are ready to create the automated system.
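To make the structure concrete, one row in this datastore corresponds to something like the following Python dict (the values are illustrative; the field names are exactly the four defined above):

```python
from datetime import date

# One SERP result record, mirroring the four datastore fields above.
record = {
    "link": "https://example.com/some-post",        # Text
    "position": 3,                                   # Number
    "Parent Keyword": "topic research",              # Text
    "Last Updated": date(2024, 5, 25).isoformat(),   # Date
}
print(record["position"])  # 3
```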

5- Import The System Blueprint

Now, go to “Scenarios,” create a new scenario, then click “Import Blueprint.”

Download Scenario Blueprint

🔴 DON’T FORGET TO EXTRACT THE ZIP FILE FIRST

6- Update the Modules

Now that we have the scenario, the database, and the Google Sheet, we just need to update the app modules to match your accounts.

1- Update the Google Sheets Module Authentication

Please update the authentication on all the Google Sheets modules, and set the spreadsheet ID, which can be found in the browser URL.
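The spreadsheet ID is the long token that follows /d/ in the sheet’s URL. This small helper (my own, purely illustrative; it is not part of the Make scenario) shows exactly which part of the URL to copy:

```python
import re

def spreadsheet_id_from_url(url: str) -> str:
    """Extract the Google Sheets spreadsheet ID from a browser URL.

    The ID is the path segment right after '/d/', e.g.
    https://docs.google.com/spreadsheets/d/<ID>/edit#gid=0
    """
    match = re.search(r"/d/([A-Za-z0-9_-]+)", url)
    if not match:
        raise ValueError(f"No spreadsheet ID found in: {url}")
    return match.group(1)

url = "https://docs.google.com/spreadsheets/d/1AbC_dEf-123/edit#gid=0"
print(spreadsheet_id_from_url(url))  # 1AbC_dEf-123
```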

2- Connect OpenAI Module

Click on the OpenAI App to connect with your account using your API key.

3- Set the Serper API Key

Since we are using the Serper API to get Google organic results, get an API key and set it here.
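For reference, a raw Serper call is just an HTTP POST with the key in the X-API-KEY header and the query in a JSON body. Here is a minimal stdlib-only sketch of what the Make module does under the hood (function names are mine; error handling omitted):

```python
import json
import urllib.request

SERPER_ENDPOINT = "https://google.serper.dev/search"

def build_serper_request(query: str, api_key: str):
    # Serper expects the API key in the X-API-KEY header and the
    # search query in a JSON body.
    headers = {"X-API-KEY": api_key, "Content-Type": "application/json"}
    payload = {"q": query}
    return SERPER_ENDPOINT, headers, payload

def organic_results(query: str, api_key: str) -> list:
    url, headers, payload = build_serper_request(query, api_key)
    request = urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(request) as resp:
        # Each organic result carries at least a "link" and a "position",
        # which is what we save in the datastore.
        return json.load(resp).get("organic", [])

url, headers, payload = build_serper_request("topic research", "YOUR_API_KEY")
print(payload)  # {'q': 'topic research'}
```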

4- Connect with Your Datastore

Make sure all modules are set up correctly and update the datastore module to match the datastore we created in step 4.

5- Set “Extract Web Content” API Key

To use my app for free, make sure to use this API key: HASAN2024

7- Run a Test

Perfect, we have our system ready!

Let’s give it a try!

If you have any problems, you can join us on the forum; it is free!

I will be there almost every day to help.

If you came to this article from my YouTube video, you know I discussed how these systems can be used to build a business in today’s “AI Era.”

One of the most in-demand products in the digital world today is ready-made systems, or what we call “done-for-you” systems.

Businesses and individuals need plug-and-play systems that help them automate or fix a specific problem.

This system is a great example of a “done-for-you” system that you can sell online. You could also use it as a powerful lead magnet.

Now, I’m giving it to you for free, yes. But that doesn’t mean you shouldn’t join my newsletter to get my weekly updates and exclusive tips 😅

Get Weekly Exclusive Tips

Anyway, the idea here is to learn and build such systems. This service will make you stand out from the competition today, as it is still new and not many freelancers know about it.

🟢 Bonus tip: to make your offering even more unique, provide more value to your customers, and turn this into a recurring-income business:

You can create custom apps in the workflow you are selling. Like the one I shared with you for free, the “Extract Web Content” app.

Yes, I gave it to you for free, but you can create something similar that makes your system unique and keeps your customers attached to the service you provide.

Do it, and thank me later 😉

How did I build the Make Custom App?

Make allows you to build any custom app you want as long as you have an API endpoint for it.

So what I simply did was create a very basic API in Python. Here is the code:

from fastapi import APIRouter
from SimplerLLM.tools.generic_loader import load_content

router = APIRouter()

# Extract content from a blog post or web page
@router.get("/tools/extract-content-from-page")
async def extract_content_from_web_page(url: str):
    return load_content(url)

I used SimplerLLM, my free Python library.

You can see how easy it is to read content with its built-in functionality. I then built the Make custom app on top of that endpoint.

👉 You can learn more about APIs and how to build and sell them in this course here.

You can add an API key as I did, and this app can even be sold independently as a custom app for Make that helps build more complex and customized systems!
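If you want to gate your own endpoint behind a key the way I did, the check itself can be tiny. Below is a minimal sketch (the function name and hard-coded key set are mine, not part of the original app); in FastAPI you would call this from a dependency that reads the X-API-Key header and raise HTTPException(status_code=401) when it fails:

```python
import secrets

# Accepted keys; in a real deployment these would come from a secrets
# store or environment variable, not a hard-coded set. "HASAN2024"
# mirrors the free key mentioned above.
VALID_KEYS = {"HASAN2024"}

def is_valid_key(candidate: str) -> bool:
    # Compare in constant time against each accepted key, so timing
    # differences don't leak key contents.
    return any(secrets.compare_digest(candidate, key) for key in VALID_KEYS)

print(is_valid_key("HASAN2024"))  # True
print(is_valid_key("wrong"))      # False
```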

Other than building custom Make apps, you can extend this system further to provide more functionalities and value to your customers or your own business.

Here are some ideas:

1- Add AI Analysis

It would be a great addition to the system to have AI analyze the final results and suggest tips based on that.

For example, my AI-powered SEO analyzer extracts an SEO report using an API, and then AI analyzes and creates a detailed report based on that data. You can test it here.

Another example is the AI keyword research tool, where I feed the keywords to the AI, and it suggests a content plan and tips based on that data.

You can even go further with an agentic workflow that automates the entire process.

2- Add Keyword Position Tracking

Since we are already extracting organic Google results with the Serper API and saving them in the database with their positions, you can also track the position of specific domains for each target keyword to see if they rank.
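Because the datastore already holds link/position pairs per keyword, the tracking step reduces to a lookup over saved records. A sketch (the record fields match the datastore defined earlier; the function name is mine):

```python
from urllib.parse import urlparse

def domain_position(results, domain):
    """Return the best ranking position of `domain` in a list of SERP
    results, or None if the domain does not appear.

    `results` mirrors the datastore records: dicts with "link" and
    "position" keys, as saved by the Serper step.
    """
    for result in sorted(results, key=lambda r: r["position"]):
        host = urlparse(result["link"]).netloc
        # Match the domain itself or any of its subdomains.
        if host == domain or host.endswith("." + domain):
            return result["position"]
    return None

serp = [
    {"link": "https://other.com/a", "position": 1},
    {"link": "https://blog.example.com/post", "position": 4},
]
print(domain_position(serp, "example.com"))  # 4
```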

3- Add Keyword Metrics

You can also add more metrics to the keywords, such as search volume, keyword difficulty, CPC, and much more. This will enrich the data for AI analysis or your customers.

You can obtain such data from SEO APIs like Spyfu, Semrush, and others.

4- Optimize the Prompts

When you open the OpenAI modules, you will see I added some prompts to extract content ideas from text.

These are not the best prompts; they do the job, but there is room for improvement.

If you have taken my prompt engineering course, you know that a few prompting techniques can yield noticeably better results.

Remember, if you have any problems or want to chat, hop into the forum. I’m there almost every day to answer your queries!

Join The Forum

Good Luck!

