09Oct

Python Technical Lead (Data & Analytics) at Verisk – Hyderabad, India


Company Description

We help the world see new possibilities and inspire change for better tomorrows. Our analytic solutions bridge content, data, and analytics to help business, people, and society become stronger, more resilient, and sustainable.

Job Description

We are looking for someone with a passion for building products and solutions for the insurance industry using data, analytics, and GenAI. This person has sharp problem-solving skills, an eagerness to learn, and excellent collaboration and communication skills.

This person has successfully developed data and analytics solutions and understands best practices in data modeling, data engineering, and AI products. To be successful, the candidate needs to thrive in a highly collaborative, cross-functional team environment. This position offers excellent future growth opportunities along either technology or management tracks. If you are an experienced Python engineer who enjoys solving hard problems, is a technologist at heart who loves coding as much as building and leading a high-performing team, and is looking to create real impact in the US insurance ecosystem, this is the opportunity for you!

Roles and responsibilities:

  • Build end-to-end data and analytics components and solutions using Python and GenAI. This includes architecting and designing the technical solution and leading the team in the implementation.
  • Collaborate with application teams to integrate the solutions with the products and platforms, ensuring alignment with technology strategy and business outcomes.
  • Architect and design reusable services leveraging and integrating frameworks, ensuring optimal performance and compatibility.
  • Perform hands-on coding while prototyping (POC).
  • Take responsibility for the overall quality of a system: implement best practices and perform code reviews and code-quality checks to promote maintainable and scalable code.
  • Serve as technology point of contact to business partners and application teams. 
  • Provide mentoring and technical guidance to junior programmers and other software engineers. 
  • Troubleshoot and perform root cause analysis.

Qualifications

  • 6–12 years of proven experience and proficiency with Python.
  • Proficiency with the following technologies is a must: Python and associated libraries, and RESTful API development.
  • Good to have: cloud platforms (AWS), databases, system design, building data pipelines, Git, CI/CD, and Linux.
  • Familiarity or hands-on experience with AI and ML concepts, prompt-engineering techniques to optimize GenAI performance, evaluation and selection of appropriate models, frameworks, and techniques for GenAI use cases, and frameworks such as LangChain or LlamaIndex is highly desirable.
  • Experienced in design and development of components and systems from ground up using engineering best practices and design patterns.
  • Ability to learn and adapt to continuously changing technology.
  • Excellent understanding of object-oriented design concepts and software development processes and methods.
  • Experienced at leading teams, interacting with business partners or customers and guiding project direction.
  • Superior organization skills, skilled at recognizing priorities and keeping the team focused on the most important features.
  • Leadership and ability to guide technical and design working sessions.
  • Demonstrated ability to work independently with minimal supervision.

#LI-NK1
#LI-Hybrid

Additional Information

For over 50 years, Verisk has been the leading data analytics and technology partner to the global insurance industry by delivering value to our clients through expertise and scale. We empower communities and businesses to make better decisions on risk, faster.

At Verisk, you’ll have the chance to use your voice and build a rewarding career that’s as unique as you are, with work flexibility and the support, coaching, and training you need to succeed. 

For the eighth consecutive year, Verisk is proudly recognized as a Great Place to Work® for outstanding workplace culture in the US, fourth consecutive year in the UK, Spain, and India, and second consecutive year in Poland.  We value learning, caring and results and make inclusivity and diversity a top priority.  In addition to our Great Place to Work® Certification, we’ve been recognized by The Wall Street Journal as one of the Best-Managed Companies and by Forbes as a World’s Best Employer and Best Employer for Women, testaments to the value we place on workplace culture.

We’re 7,000 people strong.  We relentlessly and ethically pursue innovation. And we are looking for people like you to help us translate big data into big ideas. Join us and create an exceptional experience for yourself and a better tomorrow for future generations.

Verisk Businesses

Underwriting Solutions — provides underwriting and rating solutions for auto and property, general liability, and excess and surplus to assess and price risk with speed and precision

Claims Solutions — supports end-to-end claims handling with analytic and automation tools that streamline workflow, improve claims management, and support better customer experiences

Property Estimating Solutions — offers property estimation software and tools for professionals in estimating all phases of building and repair to make day-to-day workflows the most efficient

Extreme Event Solutions — provides risk modeling solutions to help individuals, businesses, and society become more resilient to extreme events.

Specialty Business Solutions — provides an integrated suite of software for full end-to-end management of insurance and reinsurance business, helping companies manage their businesses through efficiency, flexibility, and data governance

Marketing Solutions — delivers data and insights to improve the reach, timing, relevance, and compliance of every consumer engagement

Life Insurance Solutions – offers end-to-end, data insight-driven core capabilities for carriers, distribution, and direct customers across the entire policy lifecycle of life and annuities for both individual and group.

Verisk Maplecroft — provides intelligence on sustainability, resilience, and ESG, helping people, business, and societies become stronger

Verisk Analytics is an equal opportunity employer.

All members of the Verisk Analytics family of companies are equal opportunity employers. We consider all qualified applicants for employment without regard to race, religion, color, national origin, citizenship, sex, gender identity and/or expression, sexual orientation, veteran’s status, age or disability. Verisk’s minimum hiring age is 18 except in countries with a higher age limit subject to applicable law.

https://www.verisk.com/company/careers/

Unsolicited resumes sent to Verisk, including unsolicited resumes sent to a Verisk business mailing address, fax machine or email address, or directly to Verisk employees, will be considered Verisk property. Verisk will NOT pay a fee for any placement resulting from the receipt of an unsolicited resume.





Product Manager – Data at Globalization Partners – United States (Remote-First)


Job Title: Product Manager – Data (Remote within US)

 

At G-P, our mission is to break down barriers to global business, enabling opportunities for everyone, everywhere. With remote-first and diverse teams all around the world, our people are key to achieving this mission. That’s why we trust our Dream Team members with the flexibility and autonomy to do their best and most innovative work, encourage and support their personal growth and career development, and believe in recognition for a job well done.  

Our industry-leading SaaS-based Global Employment Platform™ enables our customers to expand and grow into 180+ countries, creating more opportunities for global success – without requiring entity or subsidiary setup. The technical opportunities you’ll experience here have a positive impact on people and their work/life possibilities around the world. Beyond the power of our platform, we never forget that behind every hire is a human being. And that brings us to you.  

If you are passionate about full stack development and enjoy balancing frontend, backend, and infrastructure work, consider G-P. Here, your expertise will help design and deliver high-performing cloud-based software products, contributing to solving complex global business challenges. With a fast-moving startup environment, we thrive on innovation and expect the same from our team members, offering a dynamic space where your best work can take flight. 

Beyond a competitive compensation and benefits package, what we offer to all employees along the way is the clear and simple promise of *Opportunity Made Possible*. Come expand your skills and take part in building scalable, production-grade solutions in an environment that values creativity and impact.

 

 

Company Overview:

At G-P, we integrate artificial intelligence and machine learning into transformative products that redefine the future of work globally. Our AI team’s mission is to harness cutting-edge technology to drive innovation and achieve breakthrough business outcomes for our Employer of Record (EOR) and Global Growth businesses. We are expanding our team with a visionary Principal Product Manager to lead our ambitious AI and ML projects, building on the success of our innovative G-P Meridian Suite™.

The Role:

We are seeking an experienced Product Manager, Data to lead the development of our data ingestion, analysis, and reporting products, with a special focus on preparing data for AI and Machine Learning (ML) applications. As part of an outcome-focused Product Management organization, you will play a pivotal role in shaping our data strategy and enhancing our platform’s capabilities to support advanced analytics and intelligent solutions.

The ideal candidate has a strong background in data management, is proficient with Lakehouse architectures (especially Databricks), and possesses robust agile Product Management skills using tools like Jira and Confluence.

Key Responsibilities:

  • Define and execute the product roadmap for data ingestion, analytics solutions, and AI/ML data preparation with clear outcome-oriented goals.
  • Utilize agile methodologies to manage the product development lifecycle, employing tools such as Jira and Confluence for planning and documentation.
  • Partner with the Architecture and Engineering teams to manage the design and implementation of scalable data pipelines and architectures using Lakehouse principles to support AI and ML initiatives.
  • Lead efforts to ensure data quality, feature engineering, and dataset preparation for AI and machine learning models.
  • Leverage Databricks in conjunction with Engineering teams to optimize data processing, analytics, and facilitate machine learning workflows.
  • Work closely with engineering, data science, and business teams to deliver impactful data products and AI/ML solutions.
  • Stay updated on industry trends in data analytics and AI/ML to ensure our data solutions remain cutting-edge.
  • Gather and analyze user feedback to drive continuous product improvement, particularly in areas impacting AI and ML capabilities.
  • Create detailed product requirements, user stories, and maintain an organized product backlog using Jira and Confluence.

Ideal Candidate:

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.
  • 5+ years of product management experience focusing on data products, with at least 2 years involving AI/ML data preparation.
  • Proficiency with Lakehouse architectures and Databricks.
  • Strong understanding of data ingestion, ETL processes, data warehousing, and data preparation for AI/ML.
  • Experience with products leveraging big data technologies (e.g., Spark, Hadoop) and cloud platforms (AWS, Azure, GCP).
  • Familiarity with machine learning frameworks and tools is a plus.
  • Technical background in Data Science, Computer Science, Engineering, or a related field is advantageous but not mandatory.
  • Proven experience managing products in an agile environment.
  • Proficiency with agile tools such as Jira and Confluence.
  • Ability to create user stories, manage sprints, and coordinate with cross-functional teams to achieve outcome-focused goals.
  • Excellent analytical and problem-solving abilities.
  • Strong communication and interpersonal skills.
  • Outcome-driven mindset with the ability to align teams towards common objectives.

Why G-P?

Join G-P and contribute to a team that leads with the first fully customizable suite of global employment products—G-P Meridian Suite™. Your work will enable businesses to manage global teams efficiently, creating and sustaining employment opportunities for workers in over 180 countries with our robust compliance framework and deep industry insights. At G-P, we are outcome-focused, with a 96% customer satisfaction rate and a commitment to ongoing innovation and ethical practices.

What We Offer:

  • Key product role in the Data and AI domain, influencing significant projects that enhance career opportunities globally.
  • A vibrant remote-first culture that promotes creativity, innovation, and values each member’s contribution to our collective goals.
  • Competitive salary, transparent pay structure, and equity options.
  • Comprehensive benefits package including health, dental, and vision insurance, wellness programs, and flexible working conditions.
  • Extensive career development opportunities, including mentorship, in-house training, and professional growth.

Benefits

G-P values its employees and offers excellent benefits and perks, including generous paid parental leave, flexible time off, flexible spending accounts, medical insurance, dental insurance, vision insurance, a sabbatical after 5 years of service, and more.

 

The annual gross base salary range for this position is $110,000 to $130,000. Actual compensation for this position may vary and will depend on multiple factors including relevant qualifications, experience, education and geographic location. This position is also eligible for an annual bonus dependent on various factors, including and without limitation, individual and company performance in addition to base salary.

 

 

We will consider for employment all qualified applicants, including those with arrest records, conviction records, or other criminal histories, in a manner consistent with the requirements of any applicable state and local laws, including the City of Los Angeles’ Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, and the New York City Fair Chance Act. 

 

 

#LI-Remote  #LI-EL1

 

About Us

G-P helps growing companies unlock their full potential by making it possible to build highly skilled global teams in days instead of months. Through our SaaS-based platform, we help find, hire, onboard, pay, and manage team members, quickly and compliantly, to expand growth opportunities for everyone, everywhere – without the hassle of setting up local subsidiaries or branch offices.

G-P. Global Made Possible.

G-P is a proud Equal Opportunity Employer, and we are committed to building and maintaining a diverse, equitable and inclusive culture that celebrates authenticity. We prohibit discrimination and harassment against employees or applicants on the basis of race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth, and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, genetic information, or any other legally protected status.

G-P also is committed to providing reasonable accommodations to individuals with disabilities. If you need an accommodation due to a disability during the interview process, please contact us at ca*****@g-*.com.





Implementing Sequential Algorithms on TPU | by Chaim Rand | Oct, 2024


Accelerating AI/ML Model Training with Custom Operators — Part 3.A

Photo by Bernd Dittrich on Unsplash

This is a direct sequel to a previous post on the topic of implementing custom TPU operations with Pallas. Of particular interest are custom kernels that leverage the unique properties of the TPU architecture in a manner that optimizes runtime performance. In this post, we will attempt to demonstrate this opportunity by applying the power of Pallas to the challenge of running sequential algorithms that are interspersed within a predominantly parallelizable deep learning (DL) workload.

We will focus on Non-Maximum Suppression (NMS) of bounding-box proposals as a representative algorithm, and explore ways to optimize its implementation. An important component of computer vision (CV) object detection solutions (e.g., Mask R-CNN), NMS is commonly used to filter out overlapping bounding boxes, keeping only the “best” ones. NMS receives a list of bounding box proposals, an associated list of scores, and an IOU threshold, and proceeds to greedily and iteratively choose the remaining box with the highest score and disqualify all other boxes with which it has an IOU that exceeds the given threshold. The fact that the box chosen at the n-th iteration depends on the preceding n-1 steps of the algorithm dictates the sequential nature of its implementation. Please see here and/or here for more on the rationale behind NMS and its implementation. Although we have chosen to focus on one specific algorithm, most of our discussion should carry over to other sequential algorithms.
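To make the IOU test at the heart of NMS concrete, here is a minimal, self-contained sketch (our own illustration, not taken from this post) for two boxes in (x1, y1, x2, y2) format:

```python
# Illustrative IoU computation for two boxes (x1, y1, x2, y2).
def iou(box_a, box_b):
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(x2 - x1, 0.0) * max(y2 - y1, 0.0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two heavily overlapping boxes: intersection 8x8=64, union 136
print(iou((0, 0, 10, 10), (2, 2, 12, 12)))  # ≈ 0.47
```

With an IOU threshold of, say, 0.4, the lower-scoring of these two boxes would be disqualified by NMS.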

Offloading Sequential Algorithms to CPU

The presence of a sequential algorithm within a predominantly parallelizable ML model (e.g., Mask R-CNN) presents an interesting challenge. While GPUs, commonly used for such workloads, excel at executing parallel operations like matrix multiplication, they can significantly underperform compared to CPUs when handling sequential algorithms. This often leads to computation graphs that include crossovers between the GPU and CPU, where the GPU handles the parallel operations and the CPU handles the sequential ones. NMS is a prime example of a sequential algorithm that is commonly offloaded onto the CPU. In fact, a close analysis of torchvision’s “CUDA” implementation of NMS reveals that even it runs a significant portion of the algorithm on the CPU.

Although offloading sequential operations to the CPU may lead to improved runtime performance, there are several potential drawbacks to consider:

  1. Cross-device execution between the CPU and GPU usually requires multiple points of synchronization between the devices, which commonly results in idle time on the GPU while it waits for the CPU to complete its tasks. Given that the GPU is typically the most expensive component of the training platform, our goal is to minimize such idle time.
  2. In standard ML workflows, the CPU is responsible for preparing and feeding data to the model, which resides on the GPU. If the data input pipeline involves compute-intensive processing, this can strain the CPU, leading to “input starvation” on the GPU. In such scenarios, offloading portions of the model’s computation to the CPU could further exacerbate this issue.

To avoid these drawbacks you could consider alternative approaches, such as replacing the sequential algorithm with a comparable alternative (e.g., the one suggested here), settling for a slow/suboptimal GPU implementation of the sequential algorithm, or running the workload on CPU — each of which comes with its own potential trade-offs.

Sequential Algorithms on TPU

This is where the unique architecture of the TPU could present an opportunity. Contrary to GPUs, TPUs are sequential processors. While their ability to run highly vectorized operations makes them competitive with GPUs when running parallelizable operations such as matrix multiplication, their sequential nature could make them uniquely suited for running ML workloads that include a mix of both sequential and parallel components. Armed with the Pallas extension to JAX, our newfound TPU kernel creation tool, we will evaluate this opportunity by implementing and evaluating a custom implementation of NMS for TPU.

Disclaimers

The NMS implementations we will share below are intended for demonstrative purposes only. We have not made any significant effort to optimize them or to verify their robustness, durability, or accuracy. Please keep in mind that, as of the time of this writing, Pallas is an experimental feature — still under active development. The code we share (based on JAX version 0.4.32) may become outdated by the time you read this. Be sure to refer to the most up-to-date APIs and resources available for your Pallas development. Please do not view our mention of any algorithm, library, or API as an endorsement for their use.

We begin with a simple implementation of NMS in numpy that will serve as a baseline for performance comparison:

import numpy as np

def nms_cpu(boxes, scores, max_output_size, threshold=0.1):
    epsilon = 1e-5

    # Convert bounding boxes and scores to numpy
    boxes = np.array(boxes)
    scores = np.array(scores)

    # Coordinates of bounding boxes
    start_x = boxes[:, 0]
    start_y = boxes[:, 1]
    end_x = boxes[:, 2]
    end_y = boxes[:, 3]

    # Compute areas of bounding boxes
    areas = (end_x - start_x) * (end_y - start_y)

    # Sort by confidence score of bounding boxes
    order = np.argsort(scores)

    # Picked bounding boxes
    picked_boxes = []

    # Iterate over bounding boxes
    while order.size > 0 and len(picked_boxes) < max_output_size:

        # The index of the remaining box with the highest score
        index = order[-1]

        # Pick the bounding box with largest confidence score
        picked_boxes.append(index.item())

        # Compute coordinates of intersection
        x1 = np.maximum(start_x[index], start_x[order[:-1]])
        x2 = np.minimum(end_x[index], end_x[order[:-1]])
        y1 = np.maximum(start_y[index], start_y[order[:-1]])
        y2 = np.minimum(end_y[index], end_y[order[:-1]])

        # Compute areas of intersection and union
        w = np.maximum(x2 - x1, 0.0)
        h = np.maximum(y2 - y1, 0.0)

        intersection = w * h
        union = areas[index] + areas[order[:-1]] - intersection

        # Compute the ratio between intersection and union
        ratio = intersection / np.clip(union, epsilon, None)

        # Discard boxes above the overlap threshold
        keep = np.where(ratio < threshold)
        order = order[keep]

    return picked_boxes

To evaluate the performance of our NMS function, we generate a batch of random boxes and scores (as JAX tensors) and run the script on a Google Cloud TPU v5e system using the same environment and same benchmarking utility as in our previous post. For this experiment, we specify the CPU as the JAX default device:
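The benchmarking utility itself is not reproduced in this post. As a rough stand-in (the signature and defaults below are our assumptions, not the author’s code), it might time a function over several iterations after a warm-up:

```python
import time

# Hypothetical stand-in for the post's benchmark utility: returns the
# average runtime of fn in milliseconds, after a few warm-up calls.
def benchmark(fn, warmup=2, iters=10):
    def timed(*args, **kwargs):
        for _ in range(warmup):
            fn(*args, **kwargs)
        start = time.perf_counter()
        for _ in range(iters):
            fn(*args, **kwargs)
        return (time.perf_counter() - start) / iters * 1000  # ms
    return timed
```

For JAX functions, the measured call should also block on the result (e.g., via jax.block_until_ready) so that asynchronous dispatch does not distort the timing.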

import jax
from jax import random
import jax.numpy as jnp

def generate_random_boxes(run_on_cpu=False):
    if run_on_cpu:
        jax.config.update('jax_default_device', jax.devices('cpu')[0])
    else:
        jax.config.update('jax_default_device', jax.devices('tpu')[0])

    n_boxes = 1024
    img_size = 1024

    k1, k2, k3 = random.split(random.key(0), 3)

    # Randomly generate box sizes and positions
    box_sizes = random.randint(k1,
                               shape=(n_boxes, 2),
                               minval=1,
                               maxval=img_size)
    top_left = random.randint(k2,
                              shape=(n_boxes, 2),
                              minval=0,
                              maxval=img_size - 1)
    bottom_right = jnp.clip(top_left + box_sizes, 0, img_size - 1)

    # Concatenate top-left and bottom-right coordinates
    rand_boxes = jnp.concatenate((top_left, bottom_right),
                                 axis=1).astype(jnp.bfloat16)
    rand_scores = jax.random.uniform(k3,
                                     shape=(n_boxes,),
                                     minval=0.0,
                                     maxval=1.0)

    return rand_boxes, rand_scores

rand_boxes, rand_scores = generate_random_boxes(run_on_cpu=True)

time = benchmark(nms_cpu)(rand_boxes, rand_scores, max_output_size=128)
print(f'nms_cpu: {time}')

The resultant average runtime is 2.99 milliseconds. Note the assumption that the input and output tensors reside on the CPU. If they are on the TPU, then the time to copy them between the devices should also be taken into consideration.

If our NMS function is a component within a larger computation graph running on the TPU, we might prefer a TPU-compatible implementation to avoid the drawbacks of cross-device execution. The code block below contains a JAX implementation of NMS specifically designed to enable acceleration via JIT compilation. Denoting the number of boxes by N, we begin by calculating the IOU between each of the N(N-1) pairs of boxes and preparing an NxN boolean tensor (mask_threshold) where the (i,j)-th entry indicates whether the IOU between boxes i and j exceeds the predefined threshold.

To simplify the iterative selection of boxes, we create a copy of the mask tensor (mask_threshold2) where the diagonal elements are zeroed to prevent a box from suppressing itself. We further define two score-tracking tensors: out_scores, which retains the scores of the chosen boxes (and zeros the scores of the eliminated ones), and remaining_scores, which maintains the scores of the boxes still being considered. We then use the jax.lax.while_loop function to iteratively choose boxes while updating the out_scores and remaining_scores tensors. Note that the format of the output of this function differs from the previous function and may need to be adjusted to fit into subsequent steps of the computation graph.

import functools

# Given N boxes, calculates mask_threshold, an NxN boolean mask
# where the (i,j) entry indicates whether the IOU of boxes i and j
# exceeds the threshold. Returns mask_threshold, mask_threshold2
# (equivalent to mask_threshold with a zeroed diagonal), and
# the scores modified so that all values are greater than 0
def init_tensors(boxes, scores, threshold=0.1):
    epsilon = 1e-5

    # Extract left, top, right, bottom coordinates
    left = boxes[:, 0]
    top = boxes[:, 1]
    right = boxes[:, 2]
    bottom = boxes[:, 3]

    # Compute areas of boxes
    areas = (right - left) * (bottom - top)

    # Calculate intersection points
    inter_l = jnp.maximum(left[None, :], left[:, None])
    inter_t = jnp.maximum(top[None, :], top[:, None])
    inter_r = jnp.minimum(right[None, :], right[:, None])
    inter_b = jnp.minimum(bottom[None, :], bottom[:, None])

    # Width, height, and area of the intersection
    inter_w = jnp.clip(inter_r - inter_l, 0)
    inter_h = jnp.clip(inter_b - inter_t, 0)
    inter_area = inter_w * inter_h

    # Union of the areas
    union = areas[None, :] + areas[:, None] - inter_area

    # IoU calculation
    iou = inter_area / jnp.clip(union, epsilon)

    # Shift scores to be greater than zero
    out_scores = scores - jnp.min(scores) + epsilon

    # Create mask based on IoU threshold
    mask_threshold = iou > threshold

    # Create mask excluding diagonal (i.e., self IoU is ignored)
    mask_threshold2 = mask_threshold * (1 - jnp.eye(mask_threshold.shape[0],
                                                    dtype=mask_threshold.dtype))

    return mask_threshold, mask_threshold2, out_scores

@functools.partial(jax.jit, static_argnames=['max_output_size', 'threshold'])
def nms_jax(boxes, scores, max_output_size, threshold=0.1):
    # Initialize mask and score tensors
    mask_threshold, mask_threshold2, out_scores = init_tensors(boxes,
                                                               scores,
                                                               threshold)

    # The out_scores tensor will retain the scores of the chosen boxes
    # and zero the scores of the eliminated ones.
    # remaining_scores will maintain non-zero scores for boxes that
    # have not been chosen or eliminated
    remaining_scores = out_scores.copy()

    def choose_box(state):
        i, remaining_scores, out_scores = state
        # Choose index of box with highest score from remaining scores
        index = jnp.argmax(remaining_scores)
        # Check validity of chosen box
        valid = remaining_scores[index] > 0
        # If valid, zero all scores with IOU greater than threshold
        # (including the chosen index)
        remaining_scores = jnp.where(mask_threshold[index] * valid,
                                     0,
                                     remaining_scores)
        # Zero the scores of the eliminated boxes (not including
        # the chosen index)
        out_scores = jnp.where(mask_threshold2[index] * valid,
                               0,
                               out_scores)

        i = i + 1
        return i, remaining_scores, out_scores

    def cond_fun(state):
        i, _, _ = state
        return i < max_output_size

    i = 0
    state = (i, remaining_scores, out_scores)

    _, _, out_scores = jax.lax.while_loop(cond_fun, choose_box, state)

    # Output the resultant scores. To extract the chosen boxes,
    # take the max_output_size highest scores:
    #   min = jnp.minimum(jnp.count_nonzero(scores), max_output_size)
    #   indexes = jnp.argsort(out_scores, descending=True)[:min]
    return out_scores

# nms_jax can be run on either the CPU or the TPU
rand_boxes, rand_scores = generate_random_boxes(run_on_cpu=True)

time = benchmark(nms_jax)(rand_boxes, rand_scores, max_output_size=128)
print(f'nms_jax on CPU: {time}')

rand_boxes, rand_scores = generate_random_boxes(run_on_cpu=False)

time = benchmark(nms_jax)(rand_boxes, rand_scores, max_output_size=128)
print(f'nms_jax on TPU: {time}')

The runtimes of this implementation of NMS are 1.231 and 0.416 milliseconds on CPU and TPU, respectively.

We now present a custom implementation of NMS in which we explicitly leverage the fact that on TPUs Pallas kernels are executed in a sequential manner. Our implementation uses two boolean matrix masks and two score-keeping tensors, similar to the approach in our previous function.

We define a kernel function, choose_box, responsible for selecting the next box and updating the score-keeping tensors, which are maintained in scratch memory. We invoke the kernel across a one-dimensional grid where the number of steps (i.e., the grid-size) is determined by the max_output_size parameter.

Note that due to some limitations (as of the time of this writing) on the operations supported by Pallas, some acrobatics are required to implement both the “argmax” function and the validity check for the selected boxes. For the sake of brevity, we omit the technical details and refer the interested reader to the comments in the code below.

from jax.experimental import pallas as pl
from jax.experimental.pallas import tpu as pltpu

# argmax helper function
def pallas_argmax(scores, n_boxes):
    # We assume that the index of each box is stored in the
    # least significant bits of the score (see below)
    idx = jnp.max(scores.astype(float)).astype(int) % n_boxes
    return idx

# Pallas kernel definition
def choose_box(scores, thresh_mask1, thresh_mask2, ret_scores,
               scores_scratch, remaining_scores_scratch, *, nsteps, n_boxes):
    # Initialize scratch memory on first step
    @pl.when(pl.program_id(0) == 0)
    def _():
        scores_scratch[...] = scores[...]
        remaining_scores_scratch[...] = scores[...]

    remaining_scores = remaining_scores_scratch[...]

    # Choose box
    idx = pallas_argmax(remaining_scores, n_boxes)

    # We use any to verify the validity of the chosen box due
    # to limitations on indexing in Pallas
    valid = (remaining_scores > 0).any()

    # Update score tensors
    remaining_scores_scratch[...] = jnp.where(thresh_mask1[idx, ...] * valid,
                                              0,
                                              remaining_scores)
    scores_scratch[...] = jnp.where(thresh_mask2[idx, ...] * valid,
                                    0,
                                    scores_scratch[...])

    # Set return value on final step
    @pl.when(pl.program_id(0) == nsteps - 1)
    def _():
        ret_scores[...] = scores_scratch[...]

@functools.partial(jax.jit, static_argnames=['max_output_size', 'threshold'])
def nms_pallas(boxes, scores, max_output_size, threshold=0.1):
    n_boxes = scores.size
    mask_threshold, mask_threshold2, scores = init_tensors(boxes,
                                                           scores,
                                                           threshold)

    # In order to work around the Pallas argsort limitation
    # we create a new scores tensor with the same ordering as
    # the input scores tensor in which the index of each score
    # in the ordering is encoded in the least significant bits
    sorted = jnp.argsort(scores, descending=True)

    # Descending integers: n_boxes-1, ..., 2, 1, 0
    descending = jnp.flip(jnp.arange(n_boxes))

    # New scores in descending order with the least significant
    # bits carrying the argsort of the input scores
    ordered_scores = n_boxes * descending + sorted

    # New scores with same ordering as input scores
    scores = jnp.empty_like(ordered_scores).at[sorted].set(ordered_scores)

    grid = (max_output_size,)
    return pl.pallas_call(
        functools.partial(choose_box,
                          nsteps=max_output_size,
                          n_boxes=n_boxes),
        grid_spec=pltpu.PrefetchScalarGridSpec(
            num_scalar_prefetch=0,
            in_specs=[
                pl.BlockSpec(block_shape=(n_boxes,)),
                pl.BlockSpec(block_shape=(n_boxes, n_boxes)),
                pl.BlockSpec(block_shape=(n_boxes, n_boxes)),
            ],
            out_specs=pl.BlockSpec(block_shape=(n_boxes,)),
            scratch_shapes=[pltpu.VMEM((n_boxes,), scores.dtype),
                            pltpu.VMEM((n_boxes,), scores.dtype)],
            grid=grid,
        ),
        out_shape=jax.ShapeDtypeStruct((n_boxes,), scores.dtype),
        compiler_params=dict(mosaic=dict(
            dimension_semantics=("arbitrary",)))
    )(scores, mask_threshold, mask_threshold2)

rand_boxes, rand_scores = generate_random_boxes(run_on_cpu=False)

time = benchmark(nms_pallas)(rand_boxes, rand_scores, max_output_size=128)
print(f'nms_pallas: {time}')

The average runtime of our custom NMS operator is 0.139 milliseconds, making it roughly three times faster than our JAX-native implementation. This result highlights the potential of tailoring the implementation of sequential algorithms to the unique properties of the TPU architecture.
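The index-encoding trick at the heart of the kernel can be sanity-checked outside Pallas with a few lines of NumPy. This is a sketch with illustrative values, not part of the original benchmark:

```python
import numpy as np

n_boxes = 8
scores = np.array([0.3, 0.9, 0.1, 0.5, 0.7, 0.2, 0.8, 0.4])

# same encoding as in nms_pallas: the rank (in descending order) occupies
# the high bits, the original index the least significant bits
sorted_idx = np.argsort(-scores)           # descending argsort
descending = np.flip(np.arange(n_boxes))   # n_boxes-1, ..., 1, 0
ordered_scores = n_boxes * descending + sorted_idx
encoded = np.empty_like(ordered_scores)
encoded[sorted_idx] = ordered_scores

# the encoded scores preserve the ordering of the originals
assert (np.argsort(encoded) == np.argsort(scores)).all()
# and, as in pallas_argmax, the max decodes to the index of the
# highest original score
assert encoded.max() % n_boxes == np.argmax(scores)
```

Because the rank term is a multiple of n_boxes and the index term is strictly smaller than n_boxes, the two never interfere, which is what makes the `% n_boxes` decoding in pallas_argmax valid.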

Note that in our Pallas kernel implementation, we load the full input tensors into TPU VMEM. Given the limited capacity of VMEM, scaling up the input size (i.e., increasing the number of bounding boxes) will likely lead to memory issues. Typically, such limitations can be addressed by chunking the inputs with BlockSpecs. Unfortunately, applying this approach would break the current NMS implementation. Implementing NMS across input chunks would require a different design, which is beyond the scope of this post.
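To see why the masks dominate, here is a back-of-the-envelope estimate of the kernel's VMEM footprint. This is a rough sketch assuming float32 inputs and the block shapes used above; actual VMEM capacity is on the order of tens of MiB per core and varies by TPU generation:

```python
# rough VMEM footprint of the kernel's buffers as a function of n_boxes
def vmem_bytes(n_boxes, itemsize=4):
    # scores input, output, and the two scratch vectors
    vectors = 4 * n_boxes * itemsize
    # the two (n_boxes, n_boxes) threshold masks grow quadratically
    masks = 2 * n_boxes * n_boxes * itemsize
    return vectors + masks

print(vmem_bytes(1024) / 2**20)  # ~8 MiB already at n_boxes=1024
```

The quadratic mask terms are what force either chunking or a mask-free redesign when the number of boxes grows.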

The results of our experiments are summarized in the table below:

Results of NMS experiments (lower is better) — by Author

These results demonstrate the potential for running full ML computation graphs on TPU, even when they include sequential components. The performance improvement demonstrated by our Pallas NMS operator, in particular, highlights the opportunity of customizing kernels in a way that leverages the TPU's strengths.

In our previous post we learned of the opportunity for building custom TPU operators using the Pallas extension for JAX. Maximizing this opportunity requires tailoring the kernel implementations to the specific properties of the TPU architecture. In this post, we focused on the sequential nature of the TPU processor and its use in optimizing a custom NMS kernel. While scaling the solution to support an unrestricted number of bounding boxes would require further work, the core principles we have discussed remain applicable.

Pallas is still in the experimental phase of its development, and some limitations remain that may require creative workarounds. But its strength and potential are clearly evident, and we anticipate that they will only increase as the framework matures.



Source link

09Oct

Senior Scientist I, Media Development at Evotec – Redmond


Senior Scientist, Media Development

Just – Evotec Biologics’ Media Development group is seeking an emerging leader who is passionate about expanding worldwide access to biotherapeutics through the advancement of continuous bioprocessing technology and mammalian cell culture processes. This person will join a fast-paced, collaborative team to: 1) optimize media formulations at bench-scale for the advancement of low-cost biotherapeutics manufacturing technology, and 2) expand media development capabilities to Just-Evotec Biologics’ second site in Redmond WA. The position requires strong laboratory skills as well as a deep understanding of mammalian cell culture and the production of therapeutic proteins. A proven track record of technical skills in this area, as well as experience with laboratory set up is required. Strong written and verbal communication skills, including the ability to communicate effectively over teleconference and web-based meetings are necessary. The ideal candidate is curious, creative, engaged, and constantly looking for ways to advance and intensify scientific processes, streamline workflows, and improve technology.

Responsibilities

  • Lead media optimization efforts that achieve titer, cell viability, and product quality targets throughout long-duration upstream processes
  • Establish a media development laboratory, integrating high throughput and automation protocols into existing equipment
  • Serve as a technical resource and represent the Media function on early- and late-stage cross-functional development teams
  • Transfer novel media formulations to Manufacturing, support large-scale GMP media preparation protocols and deviations
  • Author CMC regulatory sections as required
  • Drive advancement of the Just – Evotec upstream platform and incorporate novel technologies to characterize and model cell cultures

Qualifications:

  • BS in Cell Biology, Engineering, Biochemistry, or related field and 7+ years industry experience or PhD with some relevant post-doctoral experience
  • Subject Matter Expert (SME) level understanding of cell biology, cell culture media development, and related analytical equipment. Perfusion technology experience is preferred.
  • Experience with DOE and statistical analysis, experience analyzing data using tools such as JMP, R, or Python. Experience with training machine learning models for optimizing high dimensional data sets preferred.
  • Experience with high throughput cell culture (e.g. deep-well plate assays, robotics systems such as Ambr)
  • Enthusiastic problem-solver
  • Management experience preferred
  • Strong written and verbal technical communication skills
  • Self-motivated, able to make and follow through on goals
  • Action-oriented, flexible, and eager to learn and grow skill sets
  • Able to stay organized and relaxed in complex situations, keep track of multiple tasks and data
  • Experience with Microsoft Office apps (Word, PowerPoint, Excel)
  • Experience with commercial process development, process characterization, and validation preferred
  • Available to travel to Seattle for training as needed
  • Weekend availability for experimental execution as necessary
  • Ability to lift up to 50 lbs

The base pay range for this position at commencement of employment is expected to be $120,000 to $150,000. Base salary offered may vary depending on the individual's skills, experience, and competitive market value. Additional total rewards include a discretionary annual bonus and comprehensive benefits, including medical, dental, and vision coverage, short-term and long-term disability, company-paid basic life insurance, 401k company match, flexible work, generous paid time off and paid holidays, and wellness and transportation benefits.

Evotec (US) Inc. is an Equal Opportunity Employer.  All qualified applicants will receive consideration for employment without regard to race, gender, age, disability, genetic information, gender expression, gender identity, national origin, religion, sexual orientation, or veteran status.



Source link

08Oct

Senior DevOps Engineer at NVIDIA – US, CA, Santa Clara


NVIDIA is seeking a passionate, motivated and technical Kubernetes Architect/Engineer to join its multifaceted and fast-paced Infrastructure, Planning and Processes organization where you will be working as a Principal DevOps & SRE Engineer to support the design and implementation of Kubernetes solutions for the company’s Cloud Platform.

The position will be part of a fast-paced crew that develops and maintains sophisticated build & test environments for a multitude of hardware platforms, both NVIDIA GPUs and Tegra processors, along with various operating systems (Windows/Linux/Android). The team works with various other business units within NVIDIA Software, such as Graphics Processors, Mobile Processors, Deep Learning, Artificial Intelligence, Robotics, and Autonomous Cars, to cater to their infrastructure and systems needs.

What you’ll be doing:

  • Architect, design, implement & maintain Kubernetes environments from planning to production/deployment to support CI/CD pipeline for Gitlab & Jenkins.

  • Design solutions with service discovery, networking, monitoring, logging, scheduling in Kubernetes.

  • Play a critical role in ensuring that our platform is easy to use, reliable, scalable and resistant to disruptions.

  • You will ensure that the platform enables developers to deliver value while exceeding customer needs for stability and security.

  • Actively participate in product workshops, roadmap and design sessions. Lead technical demos, whiteboards and working sessions.

  • Defend the proposed architectural design in front of the DevSecOps review board (security, networking, infrastructure, dev, ops).

  • Develop automations to improve efficiency & productivity. Participating in on-call support and critical issue coverage as a SRE engineer.

  • Take part in prototyping, crafting and developing cloud infrastructure for Nvidia.

What we need to see:

  • Kubernetes domain expertise with extensive experience building scalable, resilient platforms in both public and private cloud capable of providing platform engineering / architecture standard methodologies (including experience with architecting and implementing the overall platform, orchestration, security, and monitoring ecosystem)

  • High proficiency in administering and configuring Kubernetes.

  • Programming background in python and/or similar scripting languages.

  • Experience of maintaining cloud infrastructure and highly available production environment.

  • Demonstrated ability to automate processes using Continuous Integration/Continuous Delivery (CI/CD) tools. Proficient in using configuration-as-code and infrastructure-as-code tools such as Ansible, Puppet, Chef & Terraform. Strong background with GitLab, Jenkins and/or other CI/CD systems & Artifactory.

  • Experience with databases, both SQL (MySQL) and NoSQL (Elasticsearch/MongoDB/Cassandra).

  • Experienced with customer management/onboarding, data analytics/visualization & monitoring tools like Kibana, Grafana, Splunk, Zabbix, Prometheus and/or similar systems etc.

  • 8+ years of proven experience

  • Bachelor’s or Master’s degree in computer science, Software Engineering, or equivalent experience.

Ways to stand out from the crowd:

  • Solid understanding of containerization and microservices architecture. Certified Kubernetes Administrator (CKA), Certified Kubernetes Security Specialist (CKS) & Certified Kubernetes Application Developer (CKAD) preferred.

  • Thrives in a multi-tasking environment with constantly evolving priorities.

  • Ability to analyze complex problems into simple sub problems and then reuse available solutions to implement most of those. Ability to design simple systems that can work efficiently without needing much support.

  • Prior experience with large scale operations team. Experience with using and improving data centers. Background with computer algorithms and ability to choose the best possible algorithms to meet the scaling challenge.

With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our exclusive engineering teams are rapidly growing. If you’re a creative and autonomous engineer with a real passion for technology, we want to hear from you.

The base salary range is 164,000 USD – 327,750 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.



Source link

08Oct

Talent Systems Specialist | GovAI Blog


Our first research agenda, published in 2018, helped define and shape the nascent field of AI governance. Our team and affiliate community possess expertise in a wide variety of domains, including compute governance, US-China relations, arms race dynamics, EU policy, and AI progress forecasting.

GovAI researchers have published in top journals and conferences, including International Organization, NeurIPS, and Nature Machine Intelligence. Our alumni have gone on to research roles at top academic institutions, including the University of Oxford and the University of Cambridge, and top AI labs, including DeepMind and OpenAI.

As Talent Systems Specialist, you will report to Ryan (Director of Operations), and also work closely with Georg (Chief of Staff), Valerie (Research Manager), and others on GovAI’s talent systems and programs. Responsibilities will include:

  • Project management of many of GovAI’s hiring rounds, from outreach to offers, using the systems and processes already in place. This includes quickly understanding how the existing systems work, and leveraging them to ensure that the teams responsible for candidate evaluation have the information, direction, and tools they need to succeed.
  • Designing, implementing, and continuously improving the tools, systems, and processes we use in our recruiting. This includes systems for candidate identification, outreach to promising individuals, contending with the proliferation of AI-assisted applications, and running the end-to-end evaluation process. This work has high potential to directly shape our talent strategy by defining what is possible for the organisation.
  • Balancing multiple concurrent workstreams, prioritising based on impact and strategic alignment. Because this role will be simultaneously coordinating hiring rounds and improving the systems on which those rounds are run, the individual will need to progress multiple objectives in parallel and quickly adapt when priorities shift.
  • Collaborating with staff across GovAI to gather requirements, understand pain points, and create solutions that enhance efficiency and effectiveness across the organisation. We are particularly excited about individuals who will proactively suggest improvements to “the way we do things” and can challenge our assumptions.
  • Supporting the Director of Operations in maintaining and elevating the quality and professionalism of our recruiting and people operations.

Depending on your interests and skills, there is room for the role to grow in several directions, including:

  • Internal Product Manager, owning nearly all of GovAI’s internal systems, aiming to enable all of GovAI’s programmes to rapidly scale. This could include building and managing a CRM, research repository, website, virtual course platform, research dissemination tools, and more. A key focus would be using data to evaluate and improve the impact of our programmes.
  • Recruiting Lead, taking a deeper level of ownership over end-to-end talent selection. Responsibilities could include defining and scoping new roles, designing effective evaluations, coordinating internal and external stakeholders, and participating directly in candidate grading and hiring decisions. This role could also involve managing one or more direct reports on the Operations team who support hiring efforts.
  • Head of People, overseeing the complete GovAI staff experience beyond recruiting and onboarding. This would include developing processes for staff to assess and optimise their performance, fostering best practices for delivering effective feedback, helping team members realise ambitious professional development goals, and further strengthening GovAI’s culture and working practices.

At GovAI we believe there is no such thing as a perfect candidate and we don’t expect a successful hire to excel in all of the dimensions listed below. If you are hesitant to apply because you are unsure whether you are qualified or you worry your background doesn’t make you an obvious fit, we still strongly encourage you to apply.

We’re searching for candidates who are:

  • Highly organised and skilled in project management. This role involves managing complex, concurrent work streams. We are looking for someone who can demonstrate highly structured work habits, confidently prioritise tasks, and take a methodical approach to maintaining order and progress.
  • Proactive in identifying and driving improvements, from ideation through execution. This role should seek out opportunities to enhance our systems and processes, gather high-level guidance from senior stakeholders (e.g. GovAI leadership), and take initiative to build scalable solutions that meet the organisation’s evolving needs.
  • Good at working as part of a fast-moving team. GovAI is a small-but-growing organisation and most team members wear many hats. This role should be comfortable iterating, pivoting when priorities change, and ensuring their solutions can stand on their own when the team moves on to new challenges.
  • Excellent communicators, both verbally and in writing. This role requires clear and prompt communication with a wide range of stakeholders, often synthesising rapid or fragmented feedback into concrete solutions.
  • Adept at translating high-level goals into actionable plans, including defining project owners, setting up project plans, and overseeing their execution from start to finish.
  • Driven by excellence and a commitment to producing high-quality results. Successful candidates will actively seek out opportunities to improve their skills and maximise their impact.

Although not mandatory, the following qualities would make a candidate exceptionally promising:  

  • Experience in product management, user experience, systems architecture, or data design. Strong candidates might have a good understanding of how to build solutions that address complex needs, integrate with existing processes, and remain adaptable to future requirements.
  • Experience in recruiting, talent development, staff support, or people operations. Strong candidates might have experience with recruiting or designing talent search processes, or building HR/people-oriented programs and systems.
  • Strong interpersonal skills and leadership abilities. It would be considered a strength if this individual could provide effective line management for junior Operations team members and ensure the Operations team as a whole receives appropriate support and guidance.
  • Excited by the opportunity to use their careers to positively influence the lasting impact of artificial intelligence, in line with our organisation’s mission.

This position will be full-time, and managed by Ryan, GovAI’s Director of Operations. Our offices are located in the UK and we strongly prefer team members to be based in Oxford or London, although we are open to hiring individuals who work remotely, and Ryan is based in New York City. We are able to sponsor visas. 

This role will be compensated in line with our salary principles. As such, the salary for this role will depend on the successful applicant’s experience, but we expect the full-time range to be between £60,000 and £80,000 for candidates based in the UK. In rare cases where salary considerations would prevent a candidate from accepting an offer, there may also be some flexibility in compensation. 

Benefits associated with the role include health, dental, and vision insurance, flexible work hours, extended parental leave, ergonomic equipment, a 10% employer pension contribution, and 33 days of paid vacation (including public holidays). Based on location, the role may also offer a £5,000 annual wellbeing budget, a £1,500 annual commuting budget, and a relocation stipend.

The application process includes three stages: a written submission in the first round, a paid remote work test in the second round, and an interview in the final round. Please apply using the form linked below. 

We aim to fill this role as soon as possible and may begin reviewing applications before the deadline. Applications submitted earlier may be given additional consideration.

We also note that end-of-year hiring rounds have a higher risk of delays as our graders navigate competing holiday schedules. While we intend to reach a decision before the end of the year, we appreciate your patience if the final outcome is only reached in early 2025.

GovAI is committed to fostering a culture of inclusion and we encourage individuals with underrepresented perspectives and backgrounds to apply. We especially encourage applications from women, gender minorities, people of colour, and people from regions other than North America and Western Europe who are excited about contributing to our mission. We are an equal opportunity employer and want to make it as easy as possible for everyone who joins our team to thrive in our workplace. 

If you need assistance with the application due to a disability, or have any other questions about applying, please email re*********@go********.ai.



Source link

08Oct

Responsible AI Manager at KPMG Australia – Sydney, Australia


Job Description

Our Connected Technology Group (CTG) defines and drives the digital, data and technology strategy for KPMG. We have an important advocacy role for technology in the market and across KPMG, working with our technology leaders to build our market presence. We cultivate collaboration and integrate tech execution across our business, driving a firmwide approach to how we go-to-market and build the capability of our people and attract new talent. 

Trusted AI represents KPMG’s strategic approach and commitment to the responsible and ethical design, development, procurement, and use of artificial intelligence.  By implementing Trusted AI, we aim to accelerate value creation for our clients, the firm, and society, all while promoting trust and confidence in AI systems throughout every stage of their lifecycle. The KPMG Trusted AI Framework rests on three foundational principles of being values-driven, human-centric and trustworthy, supported by ten key ethical principles. The KPMG Australia Trusted AI Office is responsible for operationalising the firm’s Trusted AI Framework in line with emerging regulatory requirements, global standards and industry best practice.

Your Opportunity

The purpose of the Responsible AI Manager role is to support the operationalisation of best-practice, human-centred and trusted AI approaches in the development, deployment, and monitoring of AI solutions, and to contribute that expertise to the firm's overall Trusted AI governance approach.

Specific responsibilities:

  • Bring research-informed best practices on responsible AI to the design, development, and use of AI solutions by KPMG through guidance on the implementation of appropriate technical guardrails aligned with the Trusted AI Framework, regulation and best practice;
  • Define metrics for the design and testing of AI solutions against key Trusted AI pillars, including fairness, reliability, privacy, security and sustainability;
  • Actively engage with digital delivery teams to evaluate their upcoming product and project releases, ensuring alignment with Trusted AI principles and identification of potential risks;
  • Establish a deep understanding of generative AI technology, Digital FTE use cases and practical Trusted AI implementations;
  • Advise on the development of monitoring frameworks and tools for the identification and mitigation of AI risks;
  • Serve as a key contact with stakeholders in the provision of technical advice on implementation of the Trusted AI pillars;
  • Collaborate in the development and delivery of internal guidelines to support KPMG staff operationalise the Trusted AI approach based on role and responsibility; and
  • Keep up with evolving AI regulatory requirements relevant to KPMG and support the regular updating of processes and practice in line with those requirements;
  • Contribute to the continuous improvement and effectiveness of the firm’s Trusted AI governance structures; and
  • Provide input into client services strategies by defining and communicating best practice technical approaches to the operationalisation of Trusted AI.

How are you extraordinary?

  • Adaptability: Keeping pace with the rapidly changing AI industry and regulatory landscape requires a flexible mindset and a commitment to ongoing learning.
  • Communication: Articulating complex AI concepts clearly to both technical and non-technical stakeholders is key.
  • Collaboration: Forming effective partnerships with teams and guiding the integration of AI principles into projects is crucial.

Your Skills & Experience

  • Bachelor’s or Master’s degree in computer science, AI/ML, advanced analytics or related technical fields
  • 6-8 years’ experience
  • Experience in designing, developing and deploying AI solutions.
  • Experience in identifying, assessing and mitigating risks across the AI lifecycle
  • Experience in defining metrics, testing, and monitoring the implementation of responsible AI specifications within an AI solution.
  • Experience in working on cloud and on Data & AI technologies.
  • Knowledge of standard IT operations including MLOps, LLMOps and DevOps.
  • Working knowledge of Retrieval Augmented Generation (RAG) techniques and their application in AI systems
  • Demonstrated project management skills, including the ability to manage multiple projects simultaneously and deliver quality outputs on time.
  • Strong communication skills, both written and spoken, with the ability to engage with technical as well as non-technical audiences.

Additional Information

KPMG is a professional services firm with global outreach and deep sector experience. We work with clients across an array of industries to solve complex challenges, steer change and enable growth. 

Our people are what make KPMG the thriving workplace that it is and what sets us apart is that we know great minds think differently. Collaborate with a team of passionate, highly skilled professionals who’ve got your back. You’ll build relationships with unique and diverse colleagues who will provide you with the support you need to be your best and produce meaningful and impactful work in an inclusive, equitable culture.

At KPMG, you’ll take control over how you work. We’re embracing a new way of working in many ways, from offering flexible hours and locations to generous paid parental leave and career breaks. Our people enjoy a variety of exciting perks, including retail discounts, health and wellbeing initiatives, learning and growth opportunities, salary packaging options and more.

Diverse candidates have diverse needs. During your recruitment journey, information will be provided about adjustment requests. If you require additional support before submitting your application, please contact the Talent Support Team.

At KPMG every career is different, and we look forward to seeing how you grow with us.



Source link

08Oct

Demystifying Large Language Model Function Calling | by Cobus Greyling | Oct, 2024


Large Language Model (LLM) Function Calling enables models to interact directly with external functions and APIs, expanding their utility beyond language processing.

Before diving into demystifying LLM function calling, just a few considerations…

The term Large Language Model is increasingly seen as a general reference rather than a precise or technically accurate description.

Today, the term Foundation Models encompasses a broader range of capabilities, including not only language but also vision and multimodal functionalities.

There are also specialised models like Small Language Models optimised for lightweight applications and Large Action Models, which are fine-tuned for structured outputs and agent-based tasks.

This evolution reflects the diversity in AI architectures, with models designed to meet specific needs across various domains and applications. As the landscape grows, terminology will likely continue to evolve.

When using the OpenAI API with function calling, the model itself does not run the functions.

Instead, it generates parameters for potential function calls.

Your application then decides how to handle these parameters, maintaining full control over whether to call the suggested function or take another action.

In AI language models, the introduction of functions adds a new layer of autonomy.

The function calling capability allows the model to independently determine whether a function call is needed to handle a particular task or if it should respond directly.

By doing so, the model dynamically selects the most suitable response strategy based on the context, enhancing both its adaptability and effectiveness.

This decision-making power introduces a more nuanced autonomy, enabling the model to switch seamlessly between execution and conversation.

In function calling with language models, the model operates autonomously to determine whether a specific function call is appropriate based on the request.

When it identifies a match, it transitions to a more structured approach, preparing data parameters needed for the function. This allows the language model to act as a mediator, enabling efficient function handling while maintaining flexibility in processing the request.
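To make this mediator role concrete, here is a minimal sketch of the message shapes involved, using a mocked assistant message rather than a live API call. The shapes follow the legacy openai==0.28 response format used in the example later in this post; the values are illustrative:

```python
import json

# a mocked assistant message of the shape the legacy OpenAI API returns
# when the model decides a function call is needed (no network required)
assistant_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "add_numbers",
        "arguments": '{"a": 2, "b": 3}',
    },
}

def add_numbers(a, b):
    return {"result": a + b}

# the application, not the model, executes the function...
call = assistant_message["function_call"]
args = json.loads(call["arguments"])
result = add_numbers(**args)

# ...and can hand the result back to the model in a "function" role
# message for a follow-up completion
function_message = {
    "role": "function",
    "name": call["name"],
    "content": json.dumps(result),
}
print(function_message)
```

Note that the `arguments` field arrives as a JSON string generated by the model, which is why the application must parse and validate it before execution.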

AI autonomy can be viewed on a spectrum, with varying levels of independence depending on the system’s design.

By integrating function calls within generative AI applications, we introduce not only structure but also an initial layer of autonomy.

This enables AI systems to assess and respond to specific requests with a degree of self-direction. As AI technology evolves, these levels of autonomy are expected to increase, allowing models to handle tasks with greater independence and sophistication.

Consequently, this progression will enhance AI’s capacity to manage complex functions autonomously.

In the Python application below, two functions are defined: one for adding and another for subtracting.

These functions need not be as confined as in this simple illustrative example; they could instead break out to an external API.

You can also see the schema defined for the functions, with a description for each function and for each of its input parameters.

pip install openai==0.28

import openai
import json

# Prompt user to input API key
api_key = input("Please enter your OpenAI API key: ")
openai.api_key = api_key

# Define the tools: an addition function and a subtraction function
def add_numbers(a, b):
return {"result": a + b}

def subtract_numbers(a, b):
return {"result": a - b}

# Define the function schema for OpenAI function calling
functions = [
{
"name": "add_numbers",
"description": "Add two numbers together",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "number",
"description": "The first number to add"
},
"b": {
"type": "number",
"description": "The second number to add"
}
},
"required": ["a", "b"]
}
},
{
"name": "subtract_numbers",
"description": "Subtract one number from another",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "number",
"description": "The number to subtract from"
},
"b": {
"type": "number",
"description": "The number to subtract"
}
},
"required": ["a", "b"]
}
}
]

# Define a function to handle the function calling based on the function name
def handle_function_call(function_name, arguments):
    if function_name == "add_numbers":
        return add_numbers(arguments['a'], arguments['b'])
    elif function_name == "subtract_numbers":
        return subtract_numbers(arguments['a'], arguments['b'])
    else:
        raise ValueError(f"Unknown function: {function_name}")

# Prompting the model with function calling
def call_gpt(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-4-0613",  # gpt-4-0613 supports function calling
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ],
        functions=functions,
        function_call="auto"  # This allows the model to decide which function to call
    )

    # Check if the model wants to call a function
    message = response["choices"][0]["message"]
    if "function_call" in message:
        function_name = message["function_call"]["name"]
        arguments = json.loads(message["function_call"]["arguments"])
        result = handle_function_call(function_name, arguments)
        print(function_name, arguments, result)
        return f"Function called: {function_name}, Result: {result['result']}"
    else:
        return message["content"]

# Test the app
while True:
    user_input = input("Enter a math problem (addition or subtraction) or 'exit' to quit: ")
    if user_input.lower() == "exit":
        break
    response = call_gpt(user_input)
    print(response)
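Note that the application above stops after executing the function locally. In a fuller loop, the result is normally sent back to the model as a role "function" message so it can phrase a natural-language answer on a second request. A sketch of constructing that follow-up request (no API call is made here; the second ChatCompletion.create call is left as a comment):

```python
import json

# Suppose the model asked for add_numbers with a=2, b=3, and we ran it locally
function_name = "add_numbers"
result = {"result": 5}

# Rebuild the conversation, appending the function result as a
# role-"function" message the model can turn into a final reply
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 3?"},
    {"role": "assistant", "content": None,
     "function_call": {"name": function_name,
                       "arguments": json.dumps({"a": 2, "b": 3})}},
    {"role": "function", "name": function_name, "content": json.dumps(result)},
]

# followup = openai.ChatCompletion.create(model="gpt-4-0613", messages=messages)
print(messages[-1])
```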

Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.

https://platform.openai.com/docs/guides/function-calling



Source link

08Oct

Marketing Analytics Team Lead at Manychat – Austin, Texas


WHO WE ARE 🌍

Manychat is a leading Chat Marketing platform. We help businesses engage with their customers on Instagram, Facebook Messenger, WhatsApp, and Telegram.

Manychat is a Meta Official Business Partner, backed by top investors, including Bessemer Venture Partners.

With 200 teammates across three global offices (New York, Barcelona, and Yerevan), Manychat helps more than one million businesses worldwide interact with billions of customers in real time at scale.

No matter the use case (generating leads, increasing engagement, providing 24/7 customer support, accepting payments, and beyond), Manychat helps businesses improve their ROI and grow faster.

WHO WE’RE LOOKING FOR 🌟

We are seeking a seasoned leader to manage and develop a marketing-focused domain within our Analytics team.

If you have extensive hands-on experience in Marketing Analytics, extraordinary communication and cross-collaboration skills, and proven people and project management expertise, this could be the perfect role for you. We’re excited to meet someone who not only has the skills but also the energy to help us achieve outstanding results.

As the Marketing Analytics Team Lead, you’ll be an essential part of our robust and high-performing Analytics team, working closely with the Chief Marketing Officer and Chief Operations Officer.

WHAT YOUʼLL DO 🚀

The Marketing Analytics domain has three key focuses:

  1. Develop and improve our framework for Marketing Analytics.
  2. Implement and manage the marketing data and infrastructure roadmap.
  3. Drive data-informed decisions within the Marketing team and across the whole
    company.

As the Marketing Analytics Team Lead, you will:

  • Grow and develop the Marketing Analytics team, both from a technical and people standpoint.
  • Manage cross-functional projects across multiple areas, including Data Engineering, Marketing, Product, Product Marketing, Support, and more.
  • Plan and deliver insightful research to support data-driven decision-making.
  • Foster growth within your team’s expertise. 

3-6-9 month expectations:

In the first 3 months, you will:

  • Fully onboard into the Analytics team and the Marketing Analytics domain.
  • Audit current tools, processes, and approaches within the Marketing team.
  • Review cross-functional collaboration with multiple teams across the company.
  • Collect expectations from key stakeholders and develop a prioritized
    roadmap for the next 6 months.

By 6 months, you will:

  • Revise and implement processes within the Marketing Analytics team.
  • Develop key deliverables within the Marketing Analytics domain, including Attribution, Reporting, and tools like forecasting, LTV modeling, and MMM.
  • Launch cross-functional collaboration within different areas of Marketing Analytics.
  • Strengthen the team with new, relevant hires.

At the 9-month mark, you will:

  • Retrospectively assess the progress.
  • Plan the strategic development of the Marketing Analytics function, ensuring alignment with the overall Analytics and Marketing teams.
  • Develop a prioritized roadmap for the next 12 months.

TO BE SUCCESSFUL IN THIS ROLE 💥

  • 7+ years of proven experience in marketing analytics, preferably in SaaS products, with at least 3 years in a Senior or Team Lead role.
  • Technical background and knowledge of mathematical statistics.
  • Excellent technical skills (SQL, Python, BI tools).
  • Ability to formulate relevant marketing hypotheses and test them.
  • Outstanding people management skills.
  • Extreme attention to detail and a strong work ethic.

IT WOULD BE GREAT IF YOU HAVE 🤩

  • Previous experience in a fast-paced startup environment.

WHAT WE OFFER 🤗

Here’s how we care about your growth, well-being, and comfort:

  • Professional development budget for relevant conference tickets, training programs, or courses.
  • Flexible benefits package to customize your own perks.
  • Comprehensive health insurance for you, your partner, and your kids.
  • Hybrid format to split your time between the comforts of home and office.

Manychat is an Equal Opportunity Employer. Weʼre committed to building a diverse and inclusive team. We do not discriminate against qualified employees or applicants because of race, color, religion, gender identity, sex, sexual preference, sexual identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, military status, or any other characteristic protected by local law or ordinance.
This commitment is also reflected through our candidate experience. If you have individual needs that may require an accommodation during the interview process, please indicate this in your application. We will do our best to provide assistance throughout your interview process to ensure youʼre set up for success.



Source link

08Oct

Specialist – SAP Analytics at Sanofi – Hyderabad


Specialist – SAP Analytics

  • Job Title: Specialist – SAP Analytics
  • Location: Hyderabad, India
  • Job type: Permanent, Full time
  • Working Hours: India

Growing with us

We are seeking a highly skilled and experienced Specialist in SAP Analytics to join our team. As a specialist, you will play a vital role in building and configuring analytical reports in the areas of SAP Business Warehouse and Embedded Analytics (CDS Views), collaborating with other developers to ensure efficient and effective solutions.

The ideal candidate should possess a minimum of 8 years of SAP Analytics application expertise in SAP BW and Embedded Analytics (CDS Views). The SAP Analytics Specialist will validate design and effort estimates, manage change requests, review requirements and specifications, develop analytical solutions, participate in projects, scope new demands, and provide business advisory on SAP solutions. They will focus on embedded analytics CDS views and SAP Business Warehouse reports, ensuring data quality, performance, and integration.

Main responsibilities:

Embedded Analytics CDS View Development:

  • Design and develop CDS views to provide data models for embedded analytics applications.
  • Collaborate with business users and data architects to define data requirements and create efficient CDS views.
  • Ensure data quality, consistency, and performance of CDS views.

SAP Analytics Report Development:

  • Create and maintain SAP Analytics reports using tools like SAC, AFO, Web Intelligence, and Lumira based on data models from SAP BW4HANA and/or Embedded Analytics.
  • Develop complex reports and dashboards to meet the analytical needs of business users.
  • Optimize report performance and improve data visualization for effective decision-making.

Data Integration:

  • Integrate data from various sources (e.g., S/4HANA, SAP ECC, external systems) into BW and CDS views.
  • Develop data extraction, transformation, and loading (ETL) processes using BW’s ETL capabilities and work with the Integration team to facilitate data sharing using IICS.

Data Modelling:

  • Design and implement data models in BW and CDS views to support analytical requirements using industry best practices.
  • Ensure data consistency and integrity throughout the data landscape.

Performance Optimization:

  • Analyse report performance and identify bottlenecks.
  • Implement optimization techniques to improve query performance and reduce system load.

Technical Support:

  • Provide technical support for embedded analytics and BW4HANA reporting solutions.
  • Troubleshoot issues and resolve problems related to data, reports, and performance.

Collaboration:

  • Work closely with business analysts, data architects, and developers to understand requirements and deliver effective solutions.
  • Collaborate with other teams to ensure data integration and consistency across the organization.

About you

Experience

  • 8+ years of SAP Analytics experience in total with 2+ years of experience doing ERP delivery in a business-oriented context, preferably in a global company.
  • Extensive experience in implementing and maintaining SAP BW4HANA, Embedded Analytics (CDS Views) & SAC, AFO, WEBI (or any other BI Tool).
  • Must have good knowledge of functional modules like FI, SCI, SD, MM, PP, PM, and eWM for SAP Analytics.  
  • ERP transformation experience in a global company 
  • Executive-level communication and engagement skills, both written and verbal 
  • Strong configuration experience, ability to determine when to use configuration vs. code as well as advanced troubleshooting skills. 
  • Ability to translate functional specifications into technical design documents, provide efforts and cost estimates, and manage delivery of the desired functionality. 
  • Maintain a high level of quality while working on complex problems under pressure and deadlines. 
  • Guide Data Migration teams with master data & transactional data loads. 
  • Project management experience; continuous improvement skills and mindset.
  • Experience with multi-geography, multi-tier service design and management. 
  • Deep understanding of Application and Technology Architecture. 
  • Knowledge of SAP S/4 HANA is a must in the Embedded Analytics (CDS Views) context.
  • Knowledge of agile ways of working is a plus. 
  • Knowledge of current ERP software trends in their area of expertise is a plus.
  • Knowledge of Snowflake and IICS for integration is a plus.
  • Knowledge of AI ML capabilities is a plus.

Soft skills

  • Demonstrated conflict resolution & problem-solving skills in a global environment. 
  • Strong appetite to learn and discover. 
  • Adaptable and open to changes. 
  • Company-first mindset: able to put the interests of the company before their own or those of their teams if applicable. 
  • Excellent analytical skills: able to frame and formalize problem statements and formulate robust solution proposals clearly and concisely. 
  • Autonomous and Results-driven.
  • Role model our 4 values: teamwork, integrity, respect, courage 

 

Education

  • Bachelor’s Degree or equivalent in Information Technology or Engineering

Languages

  • Fluent spoken and written English

When joining our team, you will experience:

  • If you have a passion for SAP Analytics Industrial processes or Back Office and are looking for a challenging role where you can make a significant impact, we would love to hear from you.
  • An international work environment, in which you can develop your talent and realize ideas and innovations within a competent team.
  • Your own career path within Sanofi. Your professional and personal development will be supported purposefully.

Pursue progress, discover extraordinary

Better is out there. Better medications, better outcomes, better science. But progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let’s be those people. 
 
At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity 
 
At Sanofi, diversity and inclusion are foundational to how we operate and embedded in our Core Values. We recognize that to truly tap into the richness diversity brings, we must lead with inclusion and have a workplace where those differences can thrive and be leveraged to empower the lives of our colleagues, patients, and customers. We respect and celebrate the diversity of our people, their backgrounds and experiences, and provide equal opportunity for all.


Watch our ALL IN video and check out our Diversity Equity and Inclusion actions at sanofi.com!



Source link
