
The rise of Natural Language Processing (NLP) has opened exciting opportunities for developers to create engaging, interactive chatbot applications.

In this blog post, we'll guide you through creating a chatbot application from scratch using the TypeScript programming language. The project will be based on the latest version of the Angular framework for the frontend, and we'll use NestJS to build an API service on top of the existing Gemini API chat and text-generation capabilities.

What is Gemini API?

The Gemini API gives developers a powerful tool for building applications that interact with the world through natural language. The API acts as a bridge to Large Language Models (LLMs), giving you access to natural language processing (NLP) capabilities.

With Gemini, you can:

  • Generate creative text: summaries, translations, code, template emails, letters, and more.
  • Answer users' questions: provide insightful and comprehensive responses to complex queries.
  • Engage in conversation: take part in natural-feeling discussions with meaningful context.

Gemini models were built from the ground up to be multimodal, which means they can reason across text, images, audio, video, and code.

Learn more about Gemini here.

Get an API Key for Gemini API

Before getting started with code, you’ll need to generate an API Key for the project. This needs to be done on Google AI Studio.

Once you get access to Google AI Studio, click on the “Get API Key” button on the left side to be redirected to the API keys view. Then, click on the “Create API Key” button to generate your first API key.

 
Google AI Studio: Create an API Key

As you may find on the same page, the quickest way to test the API using the brand-new API key is by running the following cURL command on your terminal:

curl \
  -H 'Content-Type: application/json' \
  -d '{"contents":[{"parts":[{"text":"Write a story about a magic backpack"}]}]}' \
  -X POST https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY

Make sure to save the API key in a safe place since we’ll use it as an environment variable later.

⚠️ This API Key should not be versioned along with your source code! ⚠️

Create the Project Repository

In this case, we’ll use Nx to create the project repository from scratch. This will be a monorepo supporting two applications: a client app based on Angular and a server app based on NestJS.

npx create-nx-workspace@latest --preset apps \
--name gemini-angular-nestjs \
--nxCloud skip

That’s a fast way to create an initial Nx workspace. You can see more options for create-nx-workspace here.

Create a Shared Model library

A practical way to share code between the apps is to put it in a library. Let’s create a data-model library to define the common types for the project.

nx generate @nx/js:library --name=data-model \
 --unitTestRunner=none \
 --directory=libs/data-model \
 --importPath=data-model \
 --publishable=true \
 --projectNameAndRootFormat=as-provided

Next, let’s define a chat-content.ts file under libs/data-model/src/lib as follows:

export interface ChatContent {
    agent: 'user' | 'chatbot';
    message: string;
}

That will be the base model used by both the Angular app and the NestJS code.

Create the Server Application

It’s time to create the server application as part of the new Nx workspace. First, let’s enter the newly created workspace folder:

cd gemini-angular-nestjs

Next, let’s install the NestJS application schematics:

npm install --save-dev @nx/nest

Now we can move forward and create the NestJS application using the following command:

nx generate @nx/nest:application \
--name server \
--e2eTestRunner none 

The name parameter sets the name of the application where the NestJS code will be implemented. You can find other options for NestJS apps here.

Once the previous command finishes, run the server application:

nx serve server

Set the Environment Variable

Create a .env file under the brand-new server folder. The environment file should contain the API key you just created, as in the following example:

API_KEY=$YOUR_API_KEY
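Keep in mind that NestJS does not load .env files on its own. The original setup for this step isn't shown here, but a common approach is the @nestjs/config package (npm install @nestjs/config), registered globally so that process.env.API_KEY is populated at startup; a minimal sketch, assuming the default Nx-generated AppModule:

// app.module.ts — a sketch assuming @nestjs/config is installed; registering
// ConfigModule globally loads the .env file so process.env.API_KEY is available
// to the services created in the next step.
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

import { AppController } from './app.controller';
import { AppService } from './app.service';

@Module({
  imports: [ConfigModule.forRoot({ isGlobal: true })],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}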

Creating the Chat and Text Services

Initially, we’ll add support for text generation and multi-turn conversations (chat) using the Gemini API.

For a Node.js context, you need to install the @google/generative-ai package:

npm install @google/generative-ai

Let’s create two services to handle both functionalities:

nx generate @nx/nest:service --name=chat
nx generate @nx/nest:service --name=text

Open the chat.service.ts file and put the following code:

import { Injectable } from '@nestjs/common';
import {
  ChatSession,
  GenerativeModel,
  GoogleGenerativeAI,
} from '@google/generative-ai';

import { ChatContent } from 'data-model';

@Injectable()
export class ChatService {
  model: GenerativeModel;
  chatSession: ChatSession;
  constructor() {
    const genAI = new GoogleGenerativeAI(process.env.API_KEY);
    this.model = genAI.getGenerativeModel({ model: 'gemini-pro' });
    this.chatSession = this.model.startChat({
      history: [
        {
          role: 'user',
          parts: `You're a poet. Respond to all questions with a rhyming poem.
            What is the capital of California?
          `,
        },
        {
          role: 'model',
          parts:
            'If the capital of California is what you seek, Sacramento is where you ought to peek.',
        },
      ],
    });
  }

  async chat(chatContent: ChatContent): Promise<ChatContent> {
    const result = await this.chatSession.sendMessage(chatContent.message);
    const response = await result.response;
    const text = response.text();

    return {
      message: text,
      agent: 'chatbot',
    };
  }
}

The ChatService class defines a model created through the GoogleGenerativeAI constructor, which needs to read the API key.

The ChatSession object is needed to handle the multi-turn conversation. This object will store the conversation history for you. In order to initialize the chat, you can use the startChat() method and then set the initial context using the history property, which contains the first messages based on two roles: user and model.

The chat method takes the chatContent object and registers the new user message through the sendMessage call. The text value is then extracted from the response before returning a ChatContent object to the client application.

Now, open the text.service.ts file and put the following content:

import { Injectable } from '@nestjs/common';
import {
    GenerativeModel,
    GoogleGenerativeAI,
  } from '@google/generative-ai';
import { ChatContent } from 'data-model';

@Injectable()
export class TextService {
    model: GenerativeModel;

    constructor() {
        const genAI = new GoogleGenerativeAI(process.env.API_KEY);
        this.model = genAI.getGenerativeModel({ model: "gemini-pro"});
    }

    async generateText(message: string): Promise<ChatContent> {
        const result = await this.model.generateContent(message);
        const response = await result.response;
        const text = response.text();
    
        return {
          message: text,
          agent: 'chatbot',
        };
      }
}

The TextService class creates the model property through the GoogleGenerativeAI class, which requires the API key. Note that the gemini-pro model is used here too, since it’s optimized for text-only input and multi-turn conversations.

The generateText method takes the text message (extracted from the chatContent object by the controller) and passes it to the model’s generateContent method.

For the text generation use case, there’s no need to handle history as we did for the multi-turn conversations (chat service).

Update the App Controller

We already defined the code that uses the Gemini model for text generation and chat. Let’s create the POST endpoints for the backend application.

To do that, open the app.controller.ts file and set the following content:

// app.controller.ts

import { Controller, Get, Post, Body } from '@nestjs/common';

import { ChatContent } from 'data-model';
import { ChatService } from './chat.service';
import { TextService } from './text.service';

@Controller()
export class AppController {
  
  constructor(private readonly chatService: ChatService, private readonly textService: TextService) {}

  @Post('chat')
  chat(@Body() chatContent: ChatContent) {
    return this.chatService.chat(chatContent);
  }
  
  @Post('text')
  text(@Body() chatContent: ChatContent) {
    return this.textService.generateText(chatContent.message);
  }
}

The AppController class injects the ChatService and the TextService we created before. Then, it uses the @Post decorators to create the /api/chat and the /api/text endpoints.
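With nx serve server running, you can give the new endpoints a quick manual check. The snippet below is only an illustration: it assumes the Nx defaults for a NestJS app (global prefix api, port 3000), which also match the URLs the client service will use later, and it relies on the built-in fetch available in Node 18+.

// quick-check.ts — hypothetical helper for manually exercising the /api/text endpoint;
// adjust the URL if your main.ts uses a different prefix or port.
async function quickCheck() {
  const response = await fetch('http://localhost:3000/api/text', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ agent: 'user', message: 'Write a haiku about TypeScript' }),
  });
  console.log(await response.json()); // expected shape: { message: '...', agent: 'chatbot' }
}

quickCheck();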

Create the Client Application

For the frontend application, we’ll need to create a client folder as part of the Nx workspace. First, let’s install the Angular application schematics:

npm install --save-dev @nx/angular

Now, let’s create the Angular app with the following command:

nx generate @nx/angular:application client \
--style scss \
--prefix corp \
--routing \
--skipTests true \
--ssr false \
--bundler esbuild \
--e2eTestRunner none

The client application will use SCSS for styling and set the prefix for components as corp. You may find more details about all the available options here.

You can run the client application as follows:

nx serve client

Adding Angular Material

Let’s implement the UI using Angular Material components. We can add that support by running the following commands:

npm install @angular/material
nx g @angular/material:ng-add --project=client

The latter command will update the project configuration while setting up the styles needed for Angular Material. Pay attention to the output to understand what changes were made in the project.

Create the Gemini Service

Before starting to implement the components, let’s create an Angular service to manage the HTTP communication for the client app.

nx generate @schematics/angular:service --name=gemini --project=client --skipTests=true

Open the gemini.service.ts file and put the following code:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { ClientChatContent } from './client-chat-content';

@Injectable({
  providedIn: 'root',
})
export class GeminiService {
  constructor(private httpClient: HttpClient) { }

  chat(chatContent: ClientChatContent): Observable<ClientChatContent> {
    return this.httpClient.post<ClientChatContent>('http://localhost:3000/api/chat', chatContent);
  }

  generateText(message: string): Observable<ClientChatContent> {
    return this.httpClient.post<ClientChatContent>('http://localhost:3000/api/text', {message});
  }
}

The GeminiService class has two methods: chat and generateText.

  • chat is used to send a chat message to the server. The chat message is sent as the body of the request.
  • generateText is used to generate text based on a given prompt.
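Both methods work with a ClientChatContent type whose file isn't listed in the post. It mirrors the shared ChatContent model and adds an optional loading flag that the components below use to render a temporary placeholder while a request is in flight; a minimal sketch of client-chat-content.ts, assuming that shape:

// client-chat-content.ts — a sketch; the original file isn't shown.
// It extends the shared ChatContent model with an optional loading flag.
import { ChatContent } from 'data-model';

export interface ClientChatContent extends ChatContent {
  loading?: boolean;
}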

Create the Chat and Text Components

The Chat Component

It’s time to create the components needed for the text generation and the chat. Let’s start with the chat component:

npx nx generate @nx/angular:component \
--name=chat \
--directory=chat \
--skipTests=true \
--style=scss

Open the chat.component.ts file and set the following TypeScript code:

import { Component } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';

import { MatIconModule } from '@angular/material/icon';
import { MatInputModule } from '@angular/material/input';
import { MatButtonModule } from '@angular/material/button';
import { MatFormFieldModule } from '@angular/material/form-field';

import { GeminiService } from '../gemini.service';
import { LineBreakPipe } from '../line-break.pipe';
import { finalize } from 'rxjs';
import { ClientChatContent } from '../client-chat-content';


@Component({
  selector: 'corp-chat',
  standalone: true,
  imports: [
    CommonModule,
    MatIconModule,
    MatInputModule,
    MatButtonModule,
    MatFormFieldModule,
    FormsModule,
    LineBreakPipe,
  ],
  templateUrl: './chat.component.html',
  styleUrls: ['./chat.component.scss']
})
export class ChatComponent {
  message = '';

  contents: ClientChatContent[] = [];

  constructor(private geminiService: GeminiService) {}

  sendMessage(message: string): void {
    const chatContent: ClientChatContent = {
      agent: 'user',
      message,
    };

    this.contents.push(chatContent);
    this.contents.push({
      agent: 'chatbot',
      message: '...',
      loading: true,
    });
    
    this.message = '';
    this.geminiService
      .chat(chatContent)
      .pipe(
        finalize(() => {
          const loadingMessageIndex = this.contents.findIndex(
            (content) => content.loading
          );
          if (loadingMessageIndex !== -1) {
            this.contents.splice(loadingMessageIndex, 1);
          }
        })
      )
      .subscribe((content) => {
        this.contents.push(content);
      });
  }
}

The ChatComponent class defines two properties: message and contents. The input field defined in the template binds to the message property, which holds the current text value. The constructor injects an instance of GeminiService, which handles the HTTP communication for the chat functionality.

The sendMessage method adds the user’s message to the chat history, sends it to the API, and renders a loading placeholder while the request is in progress. Once the response arrives, the placeholder is removed and the chatbot’s message is appended to the conversation.

Once the TypeScript logic is set, open the chat.component.html file and put the following code:

<div class="chat-container">
  <div class="message-container" *ngIf="contents.length === 0">
    <p class="message">
      Welcome to your Gemini ChatBot App <br />
      Write a text to start.
    </p>
  </div>
  <div
    *ngFor="let content of contents"
    class="chat-message"
    [ngClass]="content.agent"
  >
    <img [src]="'assets/avatar-' + content.agent + '.png'" class="avatar" />
    <div class="message-details">
      <p
        class="message-content"
        [ngClass]="{ loading: content.loading }"
        [innerHTML]="content.message | lineBreak"
      ></p>
    </div>
  </div>
</div>

<div class="chat-footer-container">
  <mat-form-field class="chat-input">
    <input
      placeholder="Send a message"
      matInput
      #inputMessage
      [(ngModel)]="message"
      (keyup.enter)="sendMessage(message)"
    />
  </mat-form-field>
  <button mat-icon-button color="primary" (click)="sendMessage(message)">
    <mat-icon>send</mat-icon>
  </button>
</div>

The template defines the HTML structure and layout for the chat interface. A welcome message is displayed when there are no messages yet.

The main chat container section will display each message through the ngFor directive based on the contents array.

The chat footer container is the place where the user can input and send messages.
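The template also pipes each message through a lineBreak pipe, which isn't listed in the post. A minimal standalone sketch of line-break.pipe.ts, assuming the pipe simply converts newline characters into <br> tags for the [innerHTML] binding:

// line-break.pipe.ts — a sketch of the pipe used in the templates; the original
// implementation isn't shown, so this assumes it only replaces newlines with <br>.
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: 'lineBreak',
  standalone: true,
})
export class LineBreakPipe implements PipeTransform {
  transform(value: string): string {
    return value ? value.replace(/\n/g, '<br>') : value;
  }
}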

 
Gemini and a Chatbot Application

The Text Component

Let’s create the component for the text generation:

npx nx generate @nx/angular:component \
--name=text \
--directory=text \
--skipTests=true \
--style=scss

Open the text.component.ts file and put the following code:

import { Component } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';

import { MatIconModule } from '@angular/material/icon';
import { MatInputModule } from '@angular/material/input';
import { MatButtonModule } from '@angular/material/button';
import { MatFormFieldModule } from '@angular/material/form-field';

import { MarkdownModule } from 'ngx-markdown';

import { GeminiService } from '../gemini.service';
import { ClientChatContent } from '../client-chat-content';
import { LineBreakPipe } from '../line-break.pipe';
import { finalize } from 'rxjs';


@Component({
  selector: 'corp-text',
  standalone: true,
  imports: [
    CommonModule,
    MatIconModule,
    MatInputModule,
    MatButtonModule,
    MatFormFieldModule,
    FormsModule,
    LineBreakPipe,
    MarkdownModule
  ],
  templateUrl: './text.component.html',
  styleUrls: ['./text.component.scss']
})
export class TextComponent {
  message = '';
  contents: ClientChatContent[] = [];

  constructor(private geminiService: GeminiService) {}

  generateText(message: string): void {
    const chatContent: ClientChatContent = {
      agent: 'user',
      message,
    };

    this.contents.push(chatContent);
    this.contents.push({
      agent: 'chatbot',
      message: '...',
      loading: true,
    });

    this.message = '';
    this.geminiService
      .generateText(message)
      .pipe(
        finalize(() => {
          const loadingMessageIndex = this.contents.findIndex(
            (content) => content.loading
          );
          if (loadingMessageIndex !== -1) {
            this.contents.splice(loadingMessageIndex, 1);
          }
        })
      )
      .subscribe((content) => {
        this.contents.push(content);
      });
  }
}

The previous code defines an Angular component called TextComponent. It has the same two properties, message and contents, as the ChatComponent. The behavior is similar; the only difference is that the generateText method generates text based on a given message (the user’s prompt).

Next, open the text.component.html file and set the following code:

<div class="chat-container">
  <div class="message-container" *ngIf="contents.length === 0">
    <p class="message">
      Welcome to your Gemini App <br />
      Write an instruction to start.
    </p>
  </div>
  <div
    *ngFor="let content of contents"
    class="chat-message"
    [ngClass]="content.agent"
  >
    <img [src]="'assets/avatar-' + content.agent + '.png'" class="avatar" />
    <div class="message-details">
      <p *ngIf="content.loading"
        class="message-content"
        [ngClass]="{ loading: content.loading }"
        [innerHTML]="content.message | lineBreak"
      ></p>
      <markdown *ngIf="!content.loading"
        class="variable-binding message-content"
        [data]="content.message"
      ></markdown>
    </div>
  </div>
</div>

<div class="chat-footer-container">
  <mat-form-field class="chat-input">
    <mat-label>Send a message</mat-label>
    <textarea
      matInput
      #inputMessage
      [(ngModel)]="message"
      (keyup.enter)="generateText(message)"
    ></textarea>
  </mat-form-field>
  <button mat-icon-button color="primary" (click)="generateText(message)">
    <mat-icon>send</mat-icon>
  </button>
</div>

This template is very similar to the Chat component. However, it uses the markdown element to render code and text formatting that may come as part of the generated text.

To make it work, we’ll need to install the ngx-markdown and marked packages:

npm install --save ngx-markdown marked

Also, you will need to update the app.config.ts file to register the MarkdownModule:


import { ApplicationConfig, importProvidersFrom } from '@angular/core';
import { provideRouter } from '@angular/router';
import { provideHttpClient } from '@angular/common/http';
import { provideAnimationsAsync } from '@angular/platform-browser/animations/async';
import { MarkdownModule } from 'ngx-markdown';

import { appRoutes } from './app.routes';

export const appConfig: ApplicationConfig = {
  providers: [
    provideRouter(appRoutes),
    provideAnimationsAsync(),
    provideHttpClient(),
    importProvidersFrom([MarkdownModule.forRoot()]),
  ],
};
 
Gemini and a Text generation Application
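One last wiring detail: the Angular dev server (port 4200 by default) calls the API at http://localhost:3000, so the browser will block those requests unless the server accepts cross-origin calls. The original post doesn't show this step, and an Nx dev proxy would work just as well, but one simple option is to enable CORS in the server's main.ts; a sketch assuming the default Nx-generated bootstrap:

// main.ts (server) — a sketch; enabling CORS here is an assumption so the Angular
// dev server on port 4200 can reach the API during development.
import { NestFactory } from '@nestjs/core';

import { AppModule } from './app/app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.setGlobalPrefix('api'); // matches the /api/chat and /api/text URLs used by the client
  app.enableCors();           // allow http://localhost:4200 during development
  await app.listen(3000);
}

bootstrap();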

Source Code of the Project

Find the complete project in this GitHub repository: gemini-angular-nestjs. Do not forget to give it a star ⭐️ and play around with the code.

Conclusion

In this step-by-step tutorial, we demonstrated how to build a web application from scratch with the ability to generate text and have multi-turn conversations using the Gemini API. The Nx workspace is ready to add shared code, or even add more applications as the project grows.