09Jun

Business Development Consultant – Nordic Market

Job title: Business Development Consultant – Nordic Market

Company: Oracle

Job description: sales. Our Technology Business Development team in Dublin, Ireland will be your place to explore! Our Business Development… Consultants are a team of high-energy, driven professionals who build new business opportunities to fuel our growth in our Mid…

Expected salary:

Location: Dublin

Job date: Thu, 04 Apr 2024 07:05:31 GMT

Apply for the job now!

09Jun

Business Development Consultant – UK market

Job title: Business Development Consultant – UK market

Company: Oracle

Job description: sales. Our Technology Business Development team in Dublin, Ireland will be your place to explore! Our Business Development… Consultants are a team of high-energy, driven professionals who build new business opportunities to fuel our growth in our Mid…

Expected salary:

Location: Dublin

Job date: Thu, 04 Apr 2024 07:54:50 GMT

Apply for the job now!

09Jun

Business Development Consultant – Greek Market

Job title: Business Development Consultant – Greek Market

Company: Oracle

Job description: Job Description: We are looking for a Business Development Consultant for the Greek Market market, who is market… others on new technology. Kick start/boost your career in Tech sales. Our Technology Business Development team in Dublin, Ireland…

Expected salary:

Location: Dublin

Job date: Tue, 09 Apr 2024 01:51:24 GMT

Apply for the job now!

09Jun

How To Get Consistent JSON From Google Gemini (With Practical Example) | by Hasan Aboul Hasan


In this post, I will show you how to generate consistent JSON responses from Google Gemini using Python.

No fluff… a direct, practical solution that I created, tested, and confirmed works!


We want the output to be consistent JSON ONLY, so we can rely on it to build tools and applications!

Let me show you an example of a tool I built: The Hook Generator Tool

This is how it works:

So, I create a prompt that generates Hooks — not any prompt, but a Power prompt based on data. (not our topic for today)

I pass the prompt to the language model, and I get a JSON response. Then, I read the JSON with JavaScript and populate the UI on WordPress.

Here is a sample JSON I got from the LLM for my tool:

[
    {
        "hook_type": "The Intriguing Question",
        "hook": "What’s the most effective way to learn Python through short videos?"
    },
    {
        "hook_type": "Visual Imagery",
        "hook": "Imagine a world where Python tutorials are as captivating as short films."
    },
    {
        "hook_type": "Quotation",
        "hook": "Albert Einstein once said, 'The only source of knowledge is experience.' Learn Python through engaging short videos and experience the learning journey."
    }
]

And based on that, I can build a UI like this:

🔥 If you are interested in learning how to build this AI tool step by step and monetize it with the credit system, as I do with my tools on my website, turning WordPress into SaaS, you can check out my courses here. 🔥

Anyway, getting back to our problem, did you spot it? 🤔

Yes, it is the JSON. To build tools like this, we must ensure that we get the same JSON structure from the language model every time.

If we get a different JSON each time, it will be impossible to have a consistent UI for the tool because we will not be able to parse and read the response with JavaScript or whatever language you are using. (even with no code)

There are several ways and approaches to solve this issue and achieve consistent JSON response.

One option is prompting techniques that force the model to generate a response based on an example output you provide, something like this:

IMPORTANT: The output should be a JSON array of 10 titles without field names. Just the titles! Make Sure the JSON is valid.

Example Output:
[
    "Title 1",
    "Title 2",
    "Title 3",
    "Title 4",
    "Title 5",
    "Title 6",
    "Title 7",
    "Title 8",
    "Title 9",
    "Title 10"
]
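Even with an instruction like this, you still have to parse the reply defensively, because nothing guarantees the model returns pure JSON. Here is a minimal sketch of the idea (call_llm is a hypothetical placeholder that simulates a typical model reply):

import json

def call_llm(prompt):
    # Hypothetical placeholder for a real LLM call; here it just simulates a typical reply
    return 'Sure! Here are your titles:\n["Title 1", "Title 2", "Title 3"]'

prompt = "Generate 10 blog post titles about Python. IMPORTANT: The output should be a JSON array of 10 titles without field names. Just the titles! Make Sure the JSON is valid."

raw_output = call_llm(prompt)

try:
    titles = json.loads(raw_output)  # only works if the reply is pure JSON
    print(titles)
except json.JSONDecodeError:
    print("The model wrapped the JSON in extra text, so parsing failed.")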

Another approach is using function calling with OpenAI models, or the Python Instructor package with Pydantic, which is also limited to OpenAI and relies on function calling.
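For reference, the Instructor approach looks roughly like this. This is a hedged sketch, not the focus of this post; it assumes the instructor and openai packages and a configured OpenAI API key, so check the Instructor docs for the exact API:

import instructor
from openai import OpenAI
from pydantic import BaseModel
from typing import List

class Titles(BaseModel):
    titles: List[str]

# Patch the OpenAI client so it accepts a response_model argument
client = instructor.from_openai(OpenAI())

result = client.chat.completions.create(
    model="gpt-4",
    response_model=Titles,
    messages=[{"role": "user", "content": "Generate 5 blog post titles about SEO."}],
)
print(result.titles)  # a validated Titles instance, not raw text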

I also automated and simplified the process of building AI tools fast in this blog.

To learn more about this problem and suggested solutions, you can check out this blog post I wrote on function chaining.

🟢 But what about a generic approach that works with any model and does not rely solely on a specific model or functionality?

You CAN’T build all your tools and apps relying on one feature or model.

It is better to take a more dynamic approach so that you can switch from model to model at any time without changing your full code and structure.

With this said, I thought of a way, and I came up with a basic yet powerful approach that got me the results I wanted: consistent JSON response!

Let me show you what I did!

Let’s keep things simple with a real practical example!

Let’s say you want to build a Simple Blog Title Generator Tool, maybe like this one.

Here is what we need:

1- Craft a prompt that generates blog post titles.

2- Feed the prompt to Google Gemini or another language model.

3- Get a structured JSON response 🔴

4- Return the JSON to the UI to build it.

Our main problem is in step 3.

Here is my approach to solving this problem:

Step 1: Decide on the JSON structure you want to return.

First, you should know what you want!

Which JSON structure do you want? Decide that first, so you can ask the model for exactly that.

For example, in my case, I want something like this:

{
    "titles": [
        "Title 1",
        "Title 2",
        "Title 3",
        "Title 4",
        "Title 5"
    ]
}

Now, let’s create a Python script and continue to step 2

Step 2: Define the model

The easiest and most efficient way to build tools is to return a class or a Pydantic model that can be read and accessed easily in your code.

So, I created a Pydantic model that fits the JSON response that I want.

from pydantic import BaseModel
from typing import List

class TitlesModel(BaseModel):
    titles: List[str]

Step 3: Create the base prompt

Now, let’s create a prompt that generates blog post titles based on a topic. I will keep things simple for this example, let’s say:

base_prompt = f"Generate 5 Titles for a blog post about the following topic: [{topic}]"
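So if, for example, topic = "Python programming", the base_prompt string becomes:

Generate 5 Titles for a blog post about the following topic: [Python programming]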

Step 4: Convert the Pydantic model into an example JSON String

I created a simple Python function to automate the process of creating an example JSON string based on the Pydantic model.

We will pass this string to the LLM in Step 5.

Here is the Function:

def model_to_json(model_instance):
    """
    Converts a Pydantic model instance to a JSON string.

    Args:
        model_instance (YourModel): An instance of your Pydantic model.

    Returns:
        str: A JSON string representation of the model.
    """
    return model_instance.model_dump_json()

Then, we use this Function to generate the string representation of the Pydantic model.

json_model = model_to_json(TitlesModel(titles=['title1', 'title2']))
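With Pydantic v2, model_dump_json() returns a compact JSON string, so json_model should now hold something like this:

{"titles":["title1","title2"]}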

Step 5: Post-optimize the prompt

Now, I will use prompt engineering techniques to force the model to generate the JSON we want within the response. Here is how I did it:

optimized_prompt = base_prompt + f'.Please provide a response in a structured JSON format that matches the following model: {json_model}'

It simply appends an instruction telling the language model to generate JSON that matches the json_model string we generated in step 4.

Step 6: Generate Response with Gemini

Now call Gemini API and generate a response with the optimized_prompt.

I created a simple function that does this so I can use it directly in my code. Here it is:

import google.generativeai as genai

# Configure the GEMINI LLM
genai.configure(api_key='AIzgxb0')
model = genai.GenerativeModel('gemini-pro')

#basic generation
def generate_text(prompt):
    response = model.generate_content(prompt)
    return response.text

Then, I call it from my script this way:

gemeni_response = generate_text(optimized_prompt)

Then we will get something like:

Absolutely! Here's a JSON format representation of 5 engaging blog post titles for a Python programming blog:
JSON
{
    "titles": [
        "Python Tricks: 5 Hidden Gems You Might Have Missed",
        "Mastering Python Data Structures: Level Up Your Coding",
        "Debugging Python Code Like a Pro: Strategies and Tools",
        "Project Inspiration: Build a Fun Web App with Python",
        "Elegant Python: Writing Clean and Readable Code"
    ]
}

A combination of text and JSON in the response!

But the JSON is constructed the way we want, great!

Step 7: Extract the JSON String

Now, I used regular expressions to extract the JSON string from the output.

Here is the Function I created:

import re
import json

def extract_json(text_response):
    # This pattern matches a string that starts with '{' and ends with '}'
    pattern = r'\{[^{}]*\}'
    matches = re.finditer(pattern, text_response)
    json_objects = []
    for match in matches:
        json_str = match.group(0)
        try:
            # Validate if the extracted string is valid JSON
            json_obj = json.loads(json_str)
            json_objects.append(json_obj)
        except json.JSONDecodeError:
            # Extend the search for nested structures
            extended_json_str = extend_search(text_response, match.span())
            try:
                json_obj = json.loads(extended_json_str)
                json_objects.append(json_obj)
            except json.JSONDecodeError:
                # Handle cases where the extraction is not valid JSON
                continue
    if json_objects:
        return json_objects
    else:
        return None  # Or handle this case as you prefer


def extend_search(text, span):
    # Extend the search to try to capture nested structures
    start, end = span
    nest_count = 0
    for i in range(start, len(text)):
        if text[i] == '{':
            nest_count += 1
        elif text[i] == '}':
            nest_count -= 1
            if nest_count == 0:
                return text[start:i + 1]
    return text[start:end]

Then I call it:

json_objects = extract_json(gemeni_response)
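Note that extract_json returns a list of every JSON object it finds, so for the response above json_objects is a Python list holding a single dictionary, something like this (remaining titles omitted):

[{'titles': ['Python Tricks: 5 Hidden Gems You Might Have Missed', ...]}]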

Now we have the JSON!

Step 8: Validate the JSON

Before using the JSON, I validated it to ensure it matched the Pydantic model I wanted. This allows me to implement a retry mechanism in case of any errors.

Here is the Function I created:

from pydantic import ValidationError

def validate_json_with_model(model_class, json_data):
    """
    Validates JSON data against a specified Pydantic model.

    Args:
        model_class (BaseModel): The Pydantic model class to validate against.
        json_data (dict or list): JSON data to validate. Can be a dict for a single JSON object,
            or a list for multiple JSON objects.

    Returns:
        list: A list of validated JSON objects that match the Pydantic model.
        list: A list of errors for JSON objects that do not match the model.
    """
    validated_data = []
    validation_errors = []
    if isinstance(json_data, list):
        for item in json_data:
            try:
                model_instance = model_class(**item)
                validated_data.append(model_instance.dict())
            except ValidationError as e:
                validation_errors.append({"error": str(e), "data": item})
    elif isinstance(json_data, dict):
        try:
            model_instance = model_class(**json_data)
            validated_data.append(model_instance.dict())
        except ValidationError as e:
            validation_errors.append({"error": str(e), "data": json_data})
    else:
        raise ValueError("Invalid JSON data type. Expected dict or list.")
    return validated_data, validation_errors

Here is how I used it in the code:

validated, errors = validate_json_with_model(TitlesModel, json_objects)
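Since every step is now a plain function call, the retry mechanism mentioned above is easy to wire in. Here is a minimal sketch that reuses generate_text, extract_json, and validate_json_with_model from the previous steps and simply regenerates the response a few times if extraction or validation fails (max_retries is an arbitrary number chosen for illustration):

def generate_validated_titles(optimized_prompt, max_retries=3):
    for attempt in range(max_retries):
        response = generate_text(optimized_prompt)                                # Step 6
        json_objects = extract_json(response)                                     # Step 7
        if not json_objects:
            continue  # no JSON found in the response, try again
        validated, errors = validate_json_with_model(TitlesModel, json_objects)   # Step 8
        if validated and not errors:
            return validated[0]  # a dict that matches TitlesModel
    return None  # all attempts failed

titles_dict = generate_validated_titles(optimized_prompt)
if titles_dict is None:
    print("Could not get a valid JSON response from the model.")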

Step 9: Play with the Model!

If there are no errors from step 8, we can convert the JSON to Pydantic again and play with it as we like!

Here is the Function that converts JSON back to Pydantic:

def json_to_pydantic(model_class, json_data):
    try:
        model_instance = model_class(**json_data)
        return model_instance
    except ValidationError as e:
        print("Validation error:", e)
        return None

Here is how I used it in my script:

model_object = json_to_pydantic(TitlesModel, json_objects[0])

#play with
for title in model_object.titles:
    print(title)

You see, now I can access the titles easily with my code!

Get The Full Code

Instead of going through all the steps every time you want to build a tool or write a script, I added all this as a single function in the SimplerLLM Library!

Here is how you can build the same blog title generator tool with SimplerLLM with a few lines of code:

from pydantic import BaseModel
from typing import List
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.language.llm_addons import generate_basic_pydantic_json_model as gen_json

llm_instance = LLM.create(provider=LLMProvider.GEMINI, model_name="gemini-pro")

class Titles(BaseModel):
    list: List[str]
    topic: str

input_prompt = "Generate 5 catchy blog titles for a post about SEO"

json_response = gen_json(model_class=Titles, prompt=input_prompt, llm_instance=llm_instance)

print(json_response.list[0])

All the steps are now compressed into one line:

json_response = gen_json(model_class=Titles, prompt=input_prompt, llm_instance=llm_instance)

In this way, you can build AI tools way faster and focus on the tool idea, functionality, and prompt instead of dealing with inconsistent JSON.

What is more important is that with this approach, you are not restricted to a specific language model. For example, you can change this line:

llm_instance = LLM.create(provider=LLMProvider.GEMINI, model_name="gemini-pro")

To:

llm_instance = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4")

And you will be using another model… It is really like magic, isn’t it?

I will be more than happy if you share your thoughts and opinions, and maybe your tests if you do some.

I think I deserve some claps 😅



Source link

09Jun

Business Development Consultant – Spanish Market

Job title: Business Development Consultant – Spanish Market

Company: Oracle

Job description: Job Description: We are looking for a Business Development Consultant for the Spanish Market market, who is market… others on new technology. Kick start/boost your career in Tech sales. Our Technology Business Development team in Dublin, Ireland…

Expected salary:

Location: Dublin

Job date: Tue, 09 Apr 2024 04:19:06 GMT

Apply for the job now!

09Jun

Business Development Consultant – Polish Market

Job title: Business Development Consultant – Polish Market

Company: Oracle

Job description: Job Description: We are looking for a Business Development Consultant for the Polish Market market, who is market… others on new technology. Kick start/boost your career in Tech sales. Our Technology Business Development team in Dublin, Ireland…

Expected salary:

Location: Dublin

Job date: Tue, 09 Apr 2024 05:57:19 GMT

Apply for the job now!

09Jun

Data Modeler at Synechron – Richmond, VA


We are

At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,000+, and has 52 offices in 20 countries within key global markets.

Our challenge

We are seeking a Data modeler specializing in Data Modelling, Erwin, metadata, data management, taxonomy, ontologies, and semantic technologies.

Additional Information* 

The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within Richmond VA, Charlotte NC, Kennesaw GA, New Jersey NJ, TX is $110k – $120k/year & benefits (see below). 

The Role

Responsibilities:

  • Collaborate with variety of internal teams (e.g., Finance, Risk, Asset Management, etc.) to drive strategic data management priorities through metadata.

  • Manage the collection and governance of business, technical and operational metadata.

  • Drive various other metadata management activities, including but not limited to, lineage discovery, metadata creation, metadata change management and reporting.

  • Develop logical metadata models (e.g., metadata asset model, semantic relationships) to represent business and technical metadata (e.g., reporting hierarchy, physical data elements, logical data elements, data quality controls)

Requirements:

You are:

  • Have a strong understanding of business processes within a large financial institution.

  • Familiar with taxonomy, ontologies, and semantic modelling.

  • Have conceptual and logical business data modeling experience across domains (i.e., customer, account, risk, etc.) 

  • Deep understanding and experience with technical, operational, and business metadata. 

  • Ability to derive and communicate value (business case) from metadata management.

  • Demonstrated ability to quickly learn new software, specifically metadata management tooling.

We can offer you:

  • A highly competitive compensation and benefits package

  • A multinational organization with 52 offices in 20 countries and the possibility to work abroad

  • Laptop and a mobile phone

  • 10 days of paid annual leave (plus sick leave and national holidays)

  • Maternity & Paternity leave plans

  • A comprehensive insurance plan including: medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)

  • Retirement savings plans

  • A higher education certification policy

  • Commuter benefits (varies by region)

  • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.

  • On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses 

  • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups

  • Cutting edge projects at the world’s leading tier-one banks, financial institutions and insurance firms

  • A flat and approachable organization

  • A truly diverse, fun-loving and global work culture

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
 

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice



Source link

09Jun

Business Data Analyst

Job title: Business Data Analyst

Company: RSA Insurance

Job description: with a presence in North America, the UK, and Europe. Our business has grown organically and through acquisitions to over $21 billion… is a purpose-driven business – our purpose is to ‘help people, businesses, and society prosper in good times, and be resilient in…

Expected salary:

Location: Southside Dublin – Dundrum, Co Dublin

Job date: Wed, 10 Apr 2024 07:18:10 GMT

Apply for the job now!

09Jun

Training Team Leader

Job title: Training Team Leader

Company: Infosys

Job description: Job Description About Us Infosys is a global leader in next-generation digital services and consulting. We enable… from our innovation ecosystem. Infosys is an equal opportunity employer and all qualified applicants will receive consideration…

Expected salary:

Location: Ireland

Job date: Sat, 08 Jun 2024 03:26:49 GMT

Apply for the job now!

08Jun

Business / Operational Audit, AVP

Job title: Business / Operational Audit, AVP

Company: State Street

Job description: the adequacy and effectiveness of controls designed to mitigate key business risks to comply with relevant regulatory… development. You will be based in Ireland supporting our Global Delivery business (fund accounting, transfer agency, depositary…

Expected salary:

Location: Dublin

Job date: Wed, 10 Apr 2024 22:40:40 GMT

Apply for the job now!
