# LAION-AI / Open-Assistant
# OpenAssistant is a chat-based assistant that understands tasks, can interact
# with third-party systems, and retrieve information dynamically to do so.
from typing import Literal

import pydantic

from oasst_shared.schemas import inference


class GenerateStreamParameters(pydantic.BaseModel):
    max_new_tokens: int = 1024
    do_sample: bool = True
    top_k: int | None = None
    top_p: float | None = None
    typical_p: float | None = None
    temperature: float | None = None
    repetition_penalty: float | None = None
    seed: int | None = None
    stop: list[str] = []
    details: bool = True
    # Plugin support: basic plugin system for OA (PR #2765).
    plugins: list[inference.PluginEntry] = pydantic.Field(default_factory=list[inference.PluginEntry])

    @staticmethod
    def from_work_parameters(params: inference.WorkParameters) -> "GenerateStreamParameters":
        return GenerateStreamParameters(
            max_new_tokens=params.sampling_parameters.max_new_tokens,
            do_sample=params.do_sample,
            top_k=params.sampling_parameters.top_k,
            top_p=params.sampling_parameters.top_p,
            typical_p=params.sampling_parameters.typical_p,
            temperature=params.sampling_parameters.temperature,
            repetition_penalty=params.sampling_parameters.repetition_penalty,
            seed=params.seed,
            plugins=params.plugins,
        )


class GenerateStreamRequest(pydantic.BaseModel):
    inputs: str
    parameters: GenerateStreamParameters
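For illustration, this is the wire shape implied by `GenerateStreamRequest`: a prompt string plus the flat sampling parameters. The concrete prompt and parameter values below are made-up examples, not defaults mandated by the API.

```python
import json

# Hypothetical request payload matching the GenerateStreamRequest shape.
request = {
    "inputs": "What is the capital of France?",
    "parameters": {
        "max_new_tokens": 1024,
        "do_sample": True,
        "temperature": 0.7,
        "stop": [],
        "details": True,
    },
}
payload = json.dumps(request)
print(json.loads(payload)["parameters"]["max_new_tokens"])  # 1024
```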


class Token(pydantic.BaseModel):
    text: str
    logprob: float | None
    id: int

    def __len__(self) -> int:
        return len(self.text)

    def to_token_response(self, request_id: str) -> inference.TokenResponse:
        return inference.TokenResponse(
            request_id=request_id,
            text=self.text,
            log_prob=self.logprob,
            token_id=self.id,
        )


class StreamDetails(pydantic.BaseModel):
    generated_tokens: int
    seed: int | None
    finish_reason: Literal["length", "eos_token", "stop_sequence"]


class GenerateStreamResponse(pydantic.BaseModel):
    token: Token | None
    generated_text: str | None
    details: StreamDetails | None
    error: str | None

    @property
    def is_end(self) -> bool:
        return self.generated_text is not None

    @property
    def is_error(self) -> bool:
        return self.error is not None